The next talk is "Physically Unclonable Functions in the Universal Composition Framework," by Christina Brzuska, Marc Fischlin, Heike Schröder, and Stefan Katzenbeisser, and Christina will give the talk. Phil, I think the speakers are petrified of you: nobody is going over time, they're all under.

Yeah, so I'm Christina Brzuska. This is joint work with Marc Fischlin, Heike Schröder, and Stefan Katzenbeisser; we are all from TU Darmstadt. The topic is Physically Unclonable Functions in the Universal Composition Framework. There is a lot of UC material in the paper, but I tried to keep the UC part of the talk small, so that you do not need to be familiar with the Universal Composition framework to follow the talk.

OK, so physically unclonable functions (PUFs) are hardware tokens which get an input, a challenge or stimulus, and produce a fuzzy output. There are many implementations of PUFs, and they satisfy different security properties; not all of them satisfy the same ones, and the properties are stated somewhat informally. So if you want to use PUFs for cryptographic applications, you would like a formal security model, so that you know what the formal security properties are and can use them to construct provably secure schemes. The formal security model should, of course, cover what most PUF constructions actually achieve. That is what our paper is about: providing such a formal security definition, together with several cryptographic applications.

So for now, you can think of a PUF as a random function in a box that you can send to another party, and then you can use this to construct a cryptographic scheme. What we constructed is a key agreement scheme, an oblivious transfer scheme, and a commitment scheme. The nice thing about the commitment scheme is that, just on the way, we found a one-round transformation from oblivious transfer to commitment schemes.
Compared to common cut-and-choose techniques, that is a very efficient transformation, which we just discovered on the way, so that was very nice.

Just for those who are familiar with the Universal Composition framework: you can think of PUFs as being like non-programmable random oracles. And then there is something very surprising about building a UC-secure commitment scheme in the Universal Composition framework, because Canetti and Fischlin showed in 2001 that you cannot have UC-secure commitments in the plain model, and the proof also extends to the non-programmable random oracle model. So that we could actually do this from PUFs was a bit surprising; it shows that there is a difference, in that you can do even more with PUFs than you can do with non-programmable random oracles.

So, physically unclonable functions are hardware tokens which get a stimulus, usually called a challenge, and a randomized procedure produces an output. If you plug in the same value twice, you do not necessarily get the same output; the measurement is somewhat fuzzy. But it is fuzzy in a controlled way: the noise is bounded, so you have some control over the output. The second property that PUFs usually have is that if you take a fresh challenge value, then the output has some entropy. These two properties are achieved by most PUF constructions. But there was something else that we needed for our applications: a superpolynomially large input domain. That is not achieved by all PUFs, but by some. So bounded noise and high entropy for fresh challenge values is the definition matching what most PUFs achieve, and adding the large input domain gives the definition of what we needed for our applications.

So why do we need the superpolynomially large input domain? This is because we consider active attackers. Bob wants to send a PUF to Alice and then use this function, for example, to communicate with Alice in a secure way.
So Bob sends the PUF to Alice, but on the way the attacker gets to measure the PUF, and then the PUF gets to Alice. If the input domain is small, then the adversary can measure the whole PUF, and there is not much left that is secret between Bob and Alice which the attacker does not know. That is why we needed a big input domain. For weaker attack models, you can still use the PUFs with small input domains, but our input domain is big.

OK, so I said we wanted to have a random function in a box. The problem is that a PUF is not a function, because the measurement is fuzzy. The other problem is that you get high-entropy outputs, but you do not get uniform outputs. That is where fuzzy extractors come into play. Fuzzy extractors do both things at the same time: they do the error correction for you, and they smooth out the entropy. So if you put together the PUF and the fuzzy extractor, this is almost like a random function in a box. Well, there is some subtlety: if two input values are close, you might actually get related outputs, but if the input values are far away from each other, then you are fine.

So what are the three main properties that we needed of this PUF plus fuzzy extractor? The first one is correctness, so that the PUF plus the fuzzy extractor actually gives you a function. The second property is the well-spread domain property; that is why the domain needed to be big: if you measure the PUF a couple of times and then draw a random challenge, then usually the challenge is far away from the ones that were measured previously, if there were not too many of them, of course. And the third property, uniform outputs, says that in this case you do not know anything about the output of the fuzzy extractor plus PUF, so the output is actually uniform.

So let us have a look at key agreement as a warm-up. We have a key agreement scheme where Bob measures the PUF on a challenge C, gets a response R, applies the fuzzy extractor, and then derives the key K.
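To make the PUF-plus-fuzzy-extractor abstraction concrete, here is a toy Python sketch of it: a simulated PUF with bounded measurement noise, wrapped by a code-offset fuzzy extractor built from a repetition code and a hash. All names, parameters, and the noise model are my own illustrative assumptions, not the constructions from the paper.

```python
import hashlib
import random

N_BITS = 16   # length of the extracted key material (toy size)
REPEAT = 5    # repetition-code factor: majority vote corrects up to 2 flips
RESP_LEN = N_BITS * REPEAT

def puf_factory(seed):
    """Toy PUF with bounded noise: a stable random response table, but each
    measurement flips exactly one bit per REPEAT-bit block (toy noise model)."""
    rng = random.Random(seed)
    table = {}
    def measure(challenge):
        if challenge not in table:
            table[challenge] = [rng.randrange(2) for _ in range(RESP_LEN)]
        noisy = list(table[challenge])
        for i in range(0, RESP_LEN, REPEAT):
            noisy[i + rng.randrange(REPEAT)] ^= 1   # bounded measurement noise
        return noisy
    return measure

def encode(bits):  # repetition code: repeat every bit REPEAT times
    return [b for b in bits for _ in range(REPEAT)]

def decode(bits):  # majority vote inside each REPEAT-sized block
    return [int(sum(bits[i:i + REPEAT]) > REPEAT // 2)
            for i in range(0, len(bits), REPEAT)]

def gen(response):
    """Code-offset fuzzy extractor, Gen(r) -> (key, public helper)."""
    secret = [random.randrange(2) for _ in range(N_BITS)]
    helper = [r ^ c for r, c in zip(response, encode(secret))]  # public data
    key = hashlib.sha256(bytes(secret)).hexdigest()             # smoothed key
    return key, helper

def rep(response, helper):
    """Rep(r', helper) -> key, provided r' is close to the enrolled r."""
    secret = decode([r ^ h for r, h in zip(response, helper)])
    return hashlib.sha256(bytes(secret)).hexdigest()

# enroll once, then reproduce the same key from a fresh noisy measurement
measure = puf_factory(seed=7)
key, helper = gen(measure("challenge-1"))
assert rep(measure("challenge-1"), helper) == key
```

Two measurements of the same challenge each differ from the stable response by at most one bit per block, so their combined distance stays within the repetition code's decoding radius; that is the bounded-noise property at work, and the hash stands in for the entropy smoothing.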
Then Bob sends over the PUF to Alice; on the way, the attacker gets to measure the PUF, and at some point the PUF gets to Alice. Then Bob sends over the challenge to Alice, together with the helper information for the fuzzy extractor. As mentioned before, there is some public helper information for fuzzy extractors that you always need to transmit. Then Alice can evaluate the PUF on this challenge, get a response R prime which is close to the original one, apply the fuzzy extractor with the helper information that she got from Bob, and derive the key K.

So why does Alice actually derive the same key? That is due to the correctness property of this PUF-plus-fuzzy-extractor construction. And why is the scheme secure? If you look at the scheme, at the point where the adversary gets to measure the PUF, the challenge C is information-theoretically hidden from the adversary. Due to the well-spread domain property, the challenge C is far away from all the values that the adversary measured, and in this case we have the uniform outputs property, which says that the value K looks uniformly random from the point of view of the adversary.

OK, so I promised the one-round transformation from oblivious transfer to commitment schemes. After this slide, I will show how to build the oblivious transfer scheme from PUFs, but I would like to show this one-round transformation first. These boxes represent a protocol, an oblivious transfer protocol. There, Bob has two secrets, S0 and S1, and Alice has a secret bit B. The secret bit B and the two secrets of Bob are used in the protocol, and at the end of the protocol, Alice derives the secret SB. The security properties of oblivious transfer are that Bob does not learn anything about the secret bit B of Alice, and that Alice can only derive one of the secrets.

A commitment scheme is a scheme which operates in two phases.
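The key-agreement flow just described can be sketched end to end in Python, assuming the PUF plus fuzzy extractor behaves like a private random function carried by the token (the helper data is folded into that abstraction). The class and all names are mine, purely for illustration.

```python
import hashlib
import secrets

class PUFToken:
    """Idealized PUF plus fuzzy extractor: a private random function that
    travels physically; whoever currently holds the token can measure it."""
    def __init__(self):
        self._secret = secrets.token_bytes(16)   # stands in for the physics
        self.measured = []                       # challenges seen so far
    def measure(self, challenge):
        self.measured.append(challenge)
        return hashlib.sha256(self._secret + challenge).digest()

# Bob enrolls a fresh challenge before shipping the token.
token = PUFToken()
challenge = secrets.token_bytes(16)
k_bob = token.measure(challenge)

# In transit, the adversary measures a few challenges of its own choice;
# by the well-spread domain property it almost surely misses Bob's challenge.
adversary_view = [token.measure(secrets.token_bytes(16)) for _ in range(10)]

# Alice receives the token, then Bob's challenge, and re-derives the key.
k_alice = token.measure(challenge)
assert k_alice == k_bob
```

Since the adversary's polynomially many measurements miss the fresh challenge in a superpolynomially large domain, K stays uniform from its point of view; the sketch only illustrates the message flow, not the security proof.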
In the first phase, Alice commits to the bit B, and in the second phase, Alice can open her commitment, and Bob learns the bit B. The security properties of the commitment scheme are that in the first phase, the bit B remains hidden from Bob, and that in the second phase, Alice cannot cheat and say, yeah, my bit was one minus B.

If you look at these two protocols, there are some similarities. For instance, in the oblivious transfer protocol, the bit B remains secret; that is the same as in the commitment protocol, where the bit B should remain secret in the first phase. And in the oblivious transfer protocol, Alice only learns one of the secrets, the one corresponding to her secret bit B. So if you take the oblivious transfer protocol up there as the committing phase of the commitment protocol, then Alice can open the commitment by sending back the secret to Bob. And she cannot cheat, because if the secrets that Bob used were long, then Alice has only a very low probability of sending the right secret S one minus B to Bob. So the main idea of this transformation was to swap the role of the receiver and the role of the sender, and then, with one extra message, you can get from oblivious transfer to commitment schemes.

OK, so we will go through our oblivious transfer protocol. Bob has two secrets, S0 and S1, and Alice has a secret bit B. Alice draws a random challenge, measures the PUF on it, and then sends over the PUF to Bob. Bob draws two random values and sends them over to Alice, and then Alice chooses one of the values according to her secret bit B and blinds it with the random challenge. Now, Alice only evaluated the PUF on C. Bob will now use the value he received to compute a blinding value and send over the secrets to Alice. But Bob does not know whether Alice chose X zero or X one.
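The role-swapping transformation described above can be sketched with the oblivious transfer idealized as a black box; the function and names below are my own illustrative assumptions, not the paper's formalization.

```python
import secrets

def ideal_ot(s0, s1, b):
    """Stand-in for any secure 1-out-of-2 OT: the receiver with choice
    bit b learns s_b and nothing about the other secret; the sender
    learns nothing about b."""
    return (s0, s1)[b]

# Commit phase: one OT run with the roles swapped.
# Bob, the commitment *receiver*, plays the OT sender with two long
# random secrets; Alice, the committer, plays the OT receiver with her bit.
s0, s1 = secrets.token_bytes(32), secrets.token_bytes(32)
b = 1                           # Alice's committed bit
s_b = ideal_ot(s0, s1, b)       # hiding: the OT leaks nothing about b

# Open phase, the single extra message: Alice sends (b, s_b).
def open_check(claimed_bit, claimed_secret):
    return claimed_secret == (s0, s1)[claimed_bit]

assert open_check(b, s_b)            # honest opening is accepted
assert not open_check(1 - b, s_b)    # binding: Alice never learned s_{1-b},
                                     # so she cannot open to the other bit
```

The "one round" is exactly the opening message; everything else is the unmodified OT protocol run in the opposite direction.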
So Bob just computes a blinding value for both cases: he XORs X zero with the value he received and measures the PUF on the result to compute a blinding value ST zero, and he XORs X one with the value he received to compute another blinding value ST one. Then Bob sends over his two secrets, blinded with the two blinding values, together with the helper information for the fuzzy extractor, and then Alice can compute the blinding value for XB.

Security against Alice comes from the PUF properties: you need that Alice can only compute one of the blinding values. That holds, roughly, because the values X zero and X one are random, and Alice cannot predict the value on which she would need to measure the PUF. And security against Bob, so that the secret bit B remains secret from Bob, is information-theoretic, because the only information that Bob receives about the secret bit is, well, nothing: he just receives a random value, which is XB XOR C. If you recall the commitment scheme that we built out of the oblivious transfer scheme, that the secret bit remains hidden was one of the important properties, and that the security is information-theoretic here actually helped a lot to make the commitment scheme not only UC-secure, but even secure against adaptive corruptions. That was pretty nice.

OK, so what we did was provide a definition for physically unclonable functions, which was bounded noise, high entropy for fresh input values, and a superpolynomially large input domain. We put a fuzzy extractor on top of it and got the properties of correctness, well-spread domain, and uniform outputs. And we got several efficient, provably secure schemes without cryptographic assumptions, so only with the assumption of having a good PUF: key agreement, oblivious transfer, and commitments. And on the way, we found a one-round transformation from oblivious transfer to commitments.
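Putting the pieces of the oblivious transfer protocol together, here is a toy end-to-end run in Python, again with the PUF plus fuzzy extractor idealized as a private random function held by whoever has the token, and with all names invented for illustration.

```python
import hashlib
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class PUFToken:
    """Idealized PUF plus fuzzy extractor: only the current holder of the
    physical token can evaluate this private random function."""
    def __init__(self):
        self._secret = secrets.token_bytes(16)
    def measure(self, c):
        return hashlib.sha256(self._secret + c).digest()[:16]

# Alice (receiver, choice bit b) measures one challenge, then ships the token.
b = 0
token = PUFToken()
c = secrets.token_bytes(16)
r_c = token.measure(c)         # the only PUF value Alice will ever know

# Bob (sender) now holds the token and sends two random values to Alice.
x0, x1 = secrets.token_bytes(16), secrets.token_bytes(16)

# Alice picks the x matching her bit and blinds her challenge with it.
v = xor((x0, x1)[b], c)

# Bob cannot tell which x Alice used (v looks uniformly random to him),
# so he derives a blinding key for both cases.
st0 = token.measure(xor(v, x0))        # equals PUF(c) iff b == 0
st1 = token.measure(xor(v, x1))        # equals PUF(c) iff b == 1
s0, s1 = secrets.token_bytes(16), secrets.token_bytes(16)
blinded = (xor(s0, st0), xor(s1, st1))

# Alice can strip exactly one blinding: the one at her measured challenge.
assert xor(blinded[b], r_c) == (s0, s1)[b]
```

Alice can only unblind S_B, because unblinding the other secret would require measuring the PUF at V XOR X_{1-B}, a challenge she could not predict while she still held the token; and Bob only ever sees the one-time-pad value X_B XOR C, which hides B information-theoretically.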
What you can keep in mind is that PUFs plus fuzzy extractors are somewhat like non-programmable random oracles, but apparently you can do even more with PUFs. I wanted to put pictures of my co-authors here, because not all of them can be here today. And yeah, thank you for your attention.

Thank you very much. Are there any questions? OK, so, oh, yes, please.

Right, I did not get the advantage: why are PUFs defined as being fuzzy? Why don't you have a deterministic PUF? What is the point of being fuzzy?

Well, the PUFs that exist are just fuzzy. It would be much nicer, of course, to have PUFs which are actually a function, but they are not. So PUFs actually do exist at the moment. Yeah.

All right. So thank you again to all the speakers in the session. We now have a break at 11 o'clock.