Hello, everyone. My name is Saikrishna. I'm going to be talking about unconditional UC-secure computation with stronger malicious PUFs. This is joint work with Dakshita Khurana, Rafail Ostrovsky, and Ivan Visconti, who are all sitting here in the audience.

Before we get into our results, let's review what secure two-party computation is. Say there are two parties, P1 and P2, with inputs x and y respectively, and they want to jointly evaluate some function f on their two inputs. They take part in some protocol with several rounds, several messages being exchanged, and finally each party learns the output of the function on both their inputs. Intuitively, what is the security guarantee we want from such a protocol? Say there is an adversary that corrupts the first party P1. We want to say that the adversary does not learn anything at all about the other input y, apart from whatever it could have deduced just from the function output.

Generalizing this, Canetti introduced the framework of UC-secure computation, which we heard about in the previous talk. What does UC security say? Say there are several parties in a network interacting in several protocols, perhaps computing different functions, and some adversary corrupts one party P2. The security requirement is that this adversary should not learn anything at all about any other party's input, apart from whatever it could have learned just from the function outputs. As you can imagine, UC security has numerous applications, and I'm not going to go into the details of what these are. But unfortunately, it is impossible to construct UC-secure computation without a trusted setup assumption.

So what are the setup assumptions commonly used in cryptography? One is the common reference string model, in which general feasibility results were shown starting with the work of Canetti and others. The other assumptions are trusted physical assumptions. One of these is hardware tokens, which have been studied extensively starting with the work of Katz; the focus of our work will be physically unclonable functions, or PUFs in short.

So what are PUFs? PUFs were introduced by Pappu et al. You can think of a PUF as a physical deterministic device that is somewhat like a random oracle. What do I mean? A PUF is a physical manifestation of a random oracle: if you query a PUF with any input, the output you get appears to be a truly uniformly random string. Since it's a physical device, in order to evaluate it, you need to have the device in your possession. So it's a random oracle that is put inside a hardware chip, which you need to carry along with you if you want to evaluate it. There have been several implementations of PUFs, with designers trying to build more and more efficient PUFs inside smaller chips and so on.

So how is a PUF designed, at a very high level? They use a random physical process, and this results in a unique random function. What do I mean? If you take two different PUFs, the random function embedded inside each of them is completely different. I'm not going to go into the details of how they are created. But why should we as cryptographers care about these objects? Firstly, we know that the random oracle model has given rise to numerous applications and interesting results over the last couple of decades.
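To make the "physical random oracle" view concrete, here is a minimal Python sketch of an ideal PUF modeled as a lazily sampled random function. The class and parameter names are illustrative, not from the paper, and a real PUF is of course a physical device rather than software.

```python
import os

class IdealPUF:
    """A minimal sketch of an *ideal* PUF, modeled as a lazily sampled
    random function (the "physical random oracle" view from the talk)."""

    def __init__(self, out_len=32):
        self.out_len = out_len      # response length in bytes
        self.table = {}             # lazily sampled random function

    def query(self, challenge: bytes) -> bytes:
        # On a fresh challenge, sample a uniformly random response;
        # on a repeated challenge, answer consistently.
        if challenge not in self.table:
            self.table[challenge] = os.urandom(self.out_len)
        return self.table[challenge]

# Two independently "manufactured" PUFs implement completely different
# random functions, so their responses on the same challenge disagree
# with overwhelming probability.
puf1, puf2 = IdealPUF(), IdealPUF()
assert puf1.query(b"x") == puf1.query(b"x")   # deterministic device
assert puf1.query(b"x") != puf2.query(b"x")   # unique per device (w.h.p.)
```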
So if you have something that is a physical manifestation of a random oracle, perhaps we can achieve new things in cryptography that we didn't know how to do earlier. Also, PUFs are gaining a lot of popularity in several other fields, so perhaps we can leverage that to get very efficient protocols in the real world.

PUFs were first studied in the context of cryptographic protocols by Brzuska et al., who required two properties from any ideal PUF. The first is unpredictability, which, as the name suggests, means that the output of a PUF must be unpredictable. What do I mean? Say some party created a PUF and now wants to query this PUF on some value x. The output of the PUF on x should be computationally indistinguishable from the output of a random oracle on the same input x. This means that the output of a PUF is computationally indistinguishable from a uniformly random string. The second property we want from such an ideal PUF is that it should be unclonable. Once again, as the name suggests, this means that if some party created a PUF and sent it across to another party, then the sender, who is the creator, should not be able to create another copy of the PUF that does exactly the same thing. In other words, you can evaluate a PUF only if you have a physical copy of it with you. So these are the two properties you would need from any ideal PUF.

Thinking about this again: we saw that a PUF can be thought of as a physical manifestation of a random oracle, and in the setting of random oracles, Impagliazzo and Rudich showed that key agreement is impossible if we allow only black-box access to a random oracle. However, surprisingly, Brzuska et al. in the same paper showed that not just key agreement, but UC-secure computation for general functions, is possible unconditionally if we allow PUFs. This is quite surprising, and it shows the difference made by having a physical manifestation of a random oracle.

Okay. The notion of malicious PUFs was introduced by Ostrovsky et al. What does a malicious PUF mean? In the previous setting, we saw that a PUF that any party creates, including an adversary, should be both unpredictable and unclonable, even to the creator. However, an adversary can do something more devious, and we want to strengthen the powers of the adversary. What we want to say is that if an adversary creates a PUF, it might not be unclonable or unpredictable to the creator. For example, say the adversary who creates a PUF embeds some PRF key K inside the PUF and sends it across to an honest party. Now, if this honest party queries the PUF with some input x, the output, which is the output of the PRF, is of course going to be unpredictable to the evaluator. But it does not remain unpredictable to the adversary, because he knows that it's the output of the PRF under his own secret key. And since he has the key used in the PRF, he can of course create another PUF having the same PRF key. In fact, he does not even need to create another PUF in order to query it; he can just evaluate the PRF itself. In short, an adversary can easily clone such a PUF. Therefore, a malicious PUF may be neither unpredictable nor unclonable. However, in this model, we still want a PUF honestly generated by an honest party to remain unpredictable and unclonable.
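Here is a minimal sketch of the PRF-key attack just described, with HMAC-SHA256 standing in for the PRF; the names are illustrative. The creator, who holds K, can both predict every response and clone the device, so neither unpredictability nor unclonability holds against him.

```python
import hmac, hashlib, os

class MaliciousPUF:
    """Sketch of the PRF-key attack from the talk: the adversary builds
    a "PUF" that just evaluates a PRF (here HMAC-SHA256 stands in for
    the PRF) under a key K that the adversary keeps."""

    def __init__(self, key: bytes):
        self.key = key

    def query(self, challenge: bytes) -> bytes:
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

K = os.urandom(32)                 # adversary's secret PRF key
sent_puf = MaliciousPUF(K)         # device shipped to the honest party

# To the honest evaluator, responses look random. But the creator can
# predict any response without the device, and can clone it at will:
assert sent_puf.query(b"x") == hmac.new(K, b"x", hashlib.sha256).digest()
clone = MaliciousPUF(K)
assert clone.query(b"x") == sent_puf.query(b"x")
```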
Additionally, the adversary might also have the power to create stateful malicious PUFs. That means that if an adversary sends a malicious PUF across to an honest party, and this party queries the PUF with, say, two strings x and y, the PUF might store and record these strings inside it; later, when the protocol demands that the PUF be sent back to the adversary, the adversary can just look into the PUF and see what these queries were. As you can imagine, this is detrimental to achieving security, because the adversary then learns almost all the secret inputs of the honest party. However, once again, the same work of Ostrovsky et al. showed that UC-secure computation is still possible if we allow malicious PUFs to be created, but they required additional computational assumptions.

This gives rise to the natural question: can we achieve unconditional UC-secure computation in the setting of malicious PUFs? This was the question studied by Dachman-Soled et al. in their work, and they showed two results. The first is that unconditional UC-secure computation is in fact possible if malicious PUFs are stateless. On the other hand, it is impossible if malicious PUFs are stateful. Now, this looks like a tight result that ends this line of work, if you just want to focus on feasibility results. But we notice in our work that the impossibility result only holds if the PUF can maintain an unbounded amount of state. We know that a PUF is a physical object, and any physical object has some finite size associated with it; therefore, the number of bits that the PUF can store is upper-bounded by its size. This is the starting point of our work, where we look at stateful malicious PUFs that can have an a priori bounded state. In this setting, we give a construction of an unconditional UC-secure computation protocol.

We then consider a new model where the adversary is given more power than in the previous malicious PUF model. We define the model of encapsulated PUFs, which at a high level says that an adversary can put one PUF inside another and then transfer this new, bigger PUF to other parties. It turns out that all the previous feasibility results are insecure if we allow the adversary to encapsulate one PUF inside another. Once again, we give a construction of an unconditional UC-secure protocol in this setting.

Before I get into the details of our construction, let's review the security definition of any two-party computation protocol. On the left side, there is a real world in which an adversary interacts with an honest party: the adversary has some input y, the honest party has some input x, and they both engage in a protocol execution. On the right side, we have an ideal world where, once again, there is an adversary with input y and an honest party with input x. Additionally, there are now two more entities: a trusted functionality for computing the function f, and a simulator, who is denoted by Dumbledore here. Once they have their respective inputs, the honest party sends its input over to the trusted functionality. The adversary now engages in a protocol execution, not with the honest party, but with the simulator. At some point in the protocol, the simulator decides that it has extracted the adversary's input.
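As a concrete illustration of bounded state, here is a sketch of a stateful malicious PUF that records the queries it sees, up to an a priori bound L; the names and the bound are illustrative, with L standing in for the finite size of the physical device.

```python
import os

class BoundedStatefulPUF:
    """Sketch of a *stateful* malicious PUF: it answers like a random
    function but records the queries it sees, up to an a priori bound L
    on its state, so the creator can read them out later."""

    def __init__(self, L=8, n=16):
        self.n, self.table = n, {}
        self.L, self.log = L, []          # bounded recording state

    def query(self, ch: bytes) -> bytes:
        if len(self.log) < self.L:        # can store at most L queries
            self.log.append(ch)
        if ch not in self.table:
            self.table[ch] = os.urandom(self.n)
        return self.table[ch]

puf = BoundedStatefulPUF()
puf.query(b"secret query x"); puf.query(b"secret query y")
# When the device returns to its creator, the honest party's queries leak:
assert puf.log == [b"secret query x", b"secret query y"]
```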
So it has extracted some value y*, which may potentially be different from the adversary's input y, and sends this value over to the ideal functionality. The functionality responds to the honest party and the simulator with their respective outputs, and the simulator continues the execution of the protocol with the adversary. So what is the security requirement? We want to say that the adversary cannot distinguish whether it is playing in the left world or the right world, that is, the real world or the ideal world.

Before we get into the details of our construction, let's review the oblivious transfer functionality. Why is this important? Because we know that oblivious transfer is complete: if you can securely realize oblivious transfer, that is enough to securely realize any two-party computation functionality. This was shown first by Kilian, and later by Ishai, Prabhakaran, and Sahai. So what is oblivious transfer? There is a sender with two inputs m0 and m1, and a receiver with an input bit b. At the end of the protocol execution, the receiver should learn the value m_b, and the sender has no output. The security requirement is that the receiver should not learn the other message of the sender, and the sender should not learn the receiver's choice bit.

Before we get into our construction, let's revisit the OT protocol with malicious, stateless PUFs of Dachman-Soled et al. Once again, the sender has inputs m0 and m1, and the receiver has input b. The sender sends a PUF across to the receiver. The receiver queries the PUF on a string c picked uniformly at random, where c has length n and n can be thought of as the security parameter; the sender's input messages are also of the same length. Call the response r = PUF(c). After this, the receiver sends the PUF back to the sender. The sender now picks two uniformly random strings, x0 and x1, and sends them over to the receiver. The receiver now XORs its initial random string c with x_b and sends v = c XOR x_b back to the sender. (I'll be using plus throughout the rest of the talk to denote XOR.) The sender now computes two strings s0 and s1, where each string is just his respective input masked with an output of the PUF: that is, s0 = m0 XOR PUF(v XOR x0), and similarly s1 = m1 XOR PUF(v XOR x1). He sends these two across to the receiver.

Now how does the receiver get its output? The receiver computes m_b as s_b XOR r. Why does this work? Let's see what s_b is: s_b = m_b XOR PUF(v XOR x_b), which is just m_b XOR PUF(c). The receiver already knows the value PUF(c) = r, so he can recover the output. This protocol is very simple and elegant, and it is the starting point for our work. Intuitively, why does a malicious receiver not learn the other message of the sender? Because the value m_{1-b} is masked by the output of the PUF on c XOR x0 XOR x1. It is crucial to note that the receiver does not learn x0 and x1 until he has returned the PUF to the sender. So by the unpredictability and unclonability of the PUF, he cannot learn the value of the PUF on this input string. Let's look at the proof of security against a malicious sender.
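Here is a minimal sketch of an honest execution of this protocol, with a lazily sampled random function standing in for the PUF and "+" realized as XOR on byte strings; the helper names are illustrative, and message passing between the parties is elided.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

# Minimal lazily sampled random function standing in for the PUF.
table = {}
def PUF(challenge: bytes, n=16) -> bytes:
    if challenge not in table:
        table[challenge] = os.urandom(n)
    return table[challenge]

n = 16
m0, m1 = os.urandom(n), os.urandom(n)   # sender's messages
b = 1                                    # receiver's choice bit

# Round 1: sender ships the PUF; receiver queries it on random c.
c = os.urandom(n)
r = PUF(c)                               # receiver saves r = PUF(c)

# Round 2: receiver returns the PUF; sender picks random x0, x1.
x = [os.urandom(n), os.urandom(n)]

# Round 3: receiver sends v = c + x_b ("+" denotes XOR in the talk).
v = xor(c, x[b])

# Round 4: sender masks each message with a PUF output.
s0 = xor(m0, PUF(xor(v, x[0])))
s1 = xor(m1, PUF(xor(v, x[1])))

# Output: s_b = m_b + PUF(v + x_b) = m_b + PUF(c), and receiver knows r.
assert xor([s0, s1][b], r) == [m0, m1][b]
```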
So say we have a sender with inputs m0 and m1 who is interacting with the simulator, and the goal of the simulator is to extract these inputs of the sender. The simulator receives a PUF from the sender, queries the PUF on a random string c, sends it back, and proceeds with the execution as in the normal protocol: the sender sends two strings x0 and x1, maybe random, maybe not. The receiver, who is the simulator here, sends back v as the XOR of c and one of these two strings, either x0 or x1, and then receives two strings s0 and s1 in the final round.

Now, the first observation is that the two messages m0 and m1 appear only in the last round, in the two strings s0 and s1. Therefore the simulator, in order to extract m0 and m1, must in fact extract them from this last round. So how does he go about extracting them? Observe that in order to extract m0 and m1, he has to recover the mask values, and from the previous slide, recall that one of the mask values is the output of the PUF on the random string c that he already queried, and the other mask value is the output of the PUF on c XOR x0 XOR x1. So if he knows both these values, he can extract the two messages. However, notice that the simulator no longer has access to the PUF by the time he learns the values of x0 and x1. Therefore, in the proof of Dachman-Soled et al., in order for the simulator to extract the two messages, it needs to query the PUF after it has been sent back to the adversary. While this might not seem an unreasonable restriction, you can think of a malicious adversary that simply destroys the PUF once it gets it back. If the PUF is destroyed, then we cannot hope for the simulator to be able to query it, since it no longer has possession of it.

So how do we go about solving this? In our work, we fix it by forcing the sender, right in the first round, to send an extractable commitment to the two random strings x0 and x1. We know how to build extractable commitments from just PUFs by the work of Damgård and Scafuro. Now notice that the simulator can learn the values of x0 and x1 just from this extractable commitment, whereas a malicious receiver cannot; this follows from the hiding of the commitment. So once the simulator knows the values x0 and x1, he can query the PUF on both c as well as c XOR x0 XOR x1 right in the first round, before he has to return the PUF. Then he can successfully extract both the sender's messages, and the simulator wins.

So this gives us a good starting point on which to base our secure protocol with stateful PUFs; let's see what potential attacks could be launched if the PUF were stateful. The first attack is as follows. Consider an adversarial sender who is interacting with the simulator; after he has given out the extractable commitments, he gives out the PUF and does the following. Recall that the simulator has to query the PUF on two strings, c as well as c XOR x0 XOR x1. An adversarial PUF can notice that these two queries are not completely independent: they are correlated, because if you XOR these two strings you get x0 XOR x1, and the PUF in fact knows the two strings x0 and x1. Since the PUF can detect this correlation, it can simply decide not to respond to the second query. If the PUF does not respond to the second query, the simulator loses and the adversarial Voldemort wins. So this is just one kind of attack.
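Here is a sketch of this extraction idea, with the extractable commitment abstracted away: we simply hand the simulator x0 and x1, as extraction from the commitment would. All names are illustrative; the point is that both mask values are obtained while the simulator still holds the PUF.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

table = {}
def PUF(ch: bytes, n=16) -> bytes:          # stand-in for the sender's PUF
    if ch not in table:
        table[ch] = os.urandom(n)
    return table[ch]

n = 16
m0, m1 = os.urandom(n), os.urandom(n)       # malicious sender's inputs
x0, x1 = os.urandom(n), os.urandom(n)       # committed in round 1

# The simulator extracts x0, x1 from the extractable commitment (we
# assume extraction succeeds here) and queries the PUF on BOTH mask
# points while it still holds the device:
c = os.urandom(n)
mask_b     = PUF(c)                         # will mask m_b
mask_other = PUF(xor(c, xor(x0, x1)))       # will mask m_{1-b}

# ... PUF is returned; sender later sends s0, s1 (take b = 0, say):
v  = xor(c, x0)
s0 = xor(m0, PUF(xor(v, x0)))               # = m0 + PUF(c)
s1 = xor(m1, PUF(xor(v, x1)))               # = m1 + PUF(c + x0 + x1)

# The simulator now unmasks both messages from the last round alone.
assert xor(s0, mask_b) == m0 and xor(s1, mask_other) == m1
```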
Let's look at another attack that a stateful PUF could launch. Once again, the adversarial sender sends the PUF across. Now, when the honest receiver queries the PUF on some input string c, the PUF just stores this value c inside it and does not do anything else fishy. Then, when the sender gets the PUF back, he just looks into the PUF and sees what the query c was. This is detrimental, because he can then compute the value x_b as just c XOR v, check whether x_b equals x0 or x1, and learn whether the receiver's bit was zero or one. So these are just two sample attacks, to give you an intuition of why bounded stateful PUFs are a more difficult setting. But we notice that these two attacks can be generalized to encompass all possible attacks that a bounded stateful PUF could launch.

The first kind of attacks, which we saw earlier, are those where a malicious PUF bases its output on the previous input queries it received. We use a special coin tossing combined with a cut-and-choose to solve this issue, and I won't get into the details. The second kind of attacks are those where a malicious PUF records the queries that are sent to it and then leaks them back to the creator when it is sent back to the sender. How do we go about solving this issue? Say the state of a PUF is a priori bounded by some value L, where L is some polynomial in the security parameter. We are going to perform 2L oblivious transfer protocols between the sender and the receiver using the exact same PUF. You can see that this translates to the setting of a one-sided malicious OT extractor. What do I mean? The sender learns an L-bit leakage function of the receiver's 2L choice bits. Let me explain in more detail. The sender and the receiver take part in 2L oblivious transfer protocols, and at the end of the protocol, the receiver's 2L choice bits give a leakage to the sender; that is, the sender gets an L-bit output string computed by some arbitrary leakage function. We give a new construction of a malicious OT extractor in this setting, and the end product is that we get a leakage-free secure OT, which helps us overcome the attack. I won't get into the details of our construction, but I'll refer you to the paper for that.

So now let's go back to our new adversarial model of encapsulated PUFs. Say there's a sender who sends a PUF across to the receiver, and after several rounds of the protocol, the receiver has to send the PUF back. But now the receiver is malicious, and instead of sending the same PUF back, he sends back a new encapsulated PUF. What do I mean by this? Suppose the sender queries it on some string starting with zero: it answers using the same PUF. But if he queries it on a string starting with one, it computes some other arbitrary function. This other function could be some other, different PUF, or it could be a set of several other PUFs embedded inside (there's a small sketch of such a device below). So this is the stronger adversarial model. Unfortunately, I don't have time to get into the details. Let me just conclude by saying that we look at several new models of PUFs and give feasibility results, and PUFs are gaining a lot of popularity in the world.
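As a small illustration of the encapsulation attack just mentioned, here is a sketch of a device that answers queries starting with a zero bit using the sender's original PUF and all other queries using a different embedded function; the routing rule and names are illustrative, not from the paper.

```python
import os

class RandomFunction:
    """Lazily sampled random function, standing in for a PUF."""
    def __init__(self, n=16):
        self.n, self.table = n, {}
    def __call__(self, ch: bytes) -> bytes:
        if ch not in self.table:
            self.table[ch] = os.urandom(self.n)
        return self.table[ch]

class EncapsulatedPUF:
    """Sketch of the encapsulation attack: instead of returning the
    sender's PUF, the malicious receiver returns a device that routes
    queries starting with a 0-bit to the original PUF and all other
    queries to a different embedded function."""
    def __init__(self, original):
        self.original = original          # the sender's own PUF
        self.other = RandomFunction()     # arbitrary embedded function

    def query(self, ch: bytes) -> bytes:
        first_bit = ch[0] >> 7
        return self.original(ch) if first_bit == 0 else self.other(ch)

senders_puf = RandomFunction()
returned = EncapsulatedPUF(senders_puf)
# Queries starting with a 0-bit are answered consistently with the
# sender's own PUF, so naive consistency checks on such queries pass:
assert returned.query(b"\x00abc") == senders_puf(b"\x00abc")
```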
I think it's important for us as cryptographers to come up with stronger models to capture these attacks, and another direction we could focus on is improving the efficiency and round complexity of constructions based on PUFs. Hopefully, this will eventually help us bring theory closer to practice. Thank you.

Thank you. We have a little bit of time for questions.

So, I might be wrong, but at least in the previous paper, Brzuska et al.'s PUFs were not modeled like a random oracle, but as something which, if you read it repeatedly, gives you outputs that require error correction and are also not uniform, right? So you need to use fuzzy extractors and such. Do your techniques generalize easily, or are there complications?

So the answer is yes. If you take the definition from their paper, it's enough.

More questions? Do you have any result showing that if you have an L-bounded PUF, then the protocol has to be proportional to L, necessarily?

No, we don't have any lower bound results, and that's probably a good question.