Thank you for the introduction. This is joint work with Nishanth Chandran, Rafail Ostrovsky, and Ivan Visconti, done while I was at UCLA. So let's start with MPC, which we have probably seen several times in this conference already. We have a number of parties, each with their own input, and they want to jointly compute some function on these inputs while hiding their own input from the other parties. So basically, they exchange messages in order to compute the output of the function. There is also an adversary that can corrupt some number of parties, try to steal the inputs of the honest parties, and try to make the computation fail. In this work, we consider a malicious adversary, meaning the adversary can make the corrupted parties deviate from the protocol. We also consider UC security, which is a very strong notion of security. Unfortunately, UC security cannot be achieved for some functionalities in the plain model, so we consider MPC with a trust assumption: some trusted party generates a setup for the parties to use in the MPC. For example, we can have a common reference string, which all the parties have access to and can use to compute their messages in the MPC protocol. The trust assumption that we focus on in this talk is the tamper-proof hardware token. Here, a party comes up with a program P, embeds it in a token, and sends the token to another party. The receiver of the token can execute the program P on any input they want, as many times as they want, without the sender learning anything about these inputs or about the executions. At the same time, the receiver learns nothing about the description of the program P. A number of parties can construct such tokens and send them around to execute an MPC protocol. This is the tamper-proof hardware token model by Katz.
So formally, there is a token functionality: when a party wants to create a token for a program P, he sends a message to the token functionality with the description of P. The functionality stores this description, and the receiver can then ask the functionality to execute P on any input x he wants, and he gets P(x) in return. The problem is: in the real world, how do we actually create this tamper-proof hardware token? Ideally, we would want the sender of a token to come up with the program P and manufacture the token himself; in that case, he can just create the token and send it over to the receiver. In reality, not everyone can actually create a tamper-proof token; that is not something we can do by ourselves. So we use a third-party manufacturer to create the token: the sender sends the description of P to the manufacturer and receives a token to send to the receiver. The problem here is that the third-party manufacturer may be corrupt. In that case, the adversary learns the description of P that is sent to the manufacturer, and he can also collude with the receiver to break the security of the trusted setup, which is the tamper-proof hardware token. The sender would need to choose, among many hardware manufacturers, which one he can trust, without knowing which one is honest and which one is corrupt. Another problem is that a corrupt manufacturer can also replace the program P with a different program to undermine the MPC computation. So the question that we would like to answer is: can we obtain UC hardware-based security in a world where most hardware token manufacturers can be corrupt? And our answer is yes. We construct a protocol that UC-realizes the tamper-proof token functionality with abort in the corrupted token model, assuming only the existence of one-way functions. This is our main result.
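As a rough illustration (my own sketch, not code from the talk), the standard tamper-proof token functionality can be modeled as a simple stateful object: the sender registers a program, and the receiver gets only oracle access to it. The class and method names here are hypothetical.

```python
class TokenFunctionality:
    """Idealized tamper-proof token: stores a program on 'create',
    answers oracle queries on 'execute', and reveals nothing else."""

    def __init__(self):
        self.programs = {}  # token id -> program (a Python callable here)

    def create(self, token_id, program):
        # The sender hands the description of P to the functionality.
        self.programs[token_id] = program

    def execute(self, token_id, x):
        # The receiver may query P on any input, any number of times;
        # only P(x) is revealed, never the description of P.
        return self.programs[token_id](x)


# Usage: the sender embeds P(x) = x + 1; the receiver queries it twice.
F = TokenFunctionality()
F.create("tok1", lambda x: x + 1)
assert F.execute("tok1", 41) == 42
assert F.execute("tok1", 1) == 2
```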
And we need to explain what the token functionality with abort and the corrupted token model are. This is the setup that we are interested in: we have several manufacturers, some of which may be corrupt and some of which are honest, but neither the sender nor the receiver knows which ones are corrupt. So when the sender wants to send a token that embeds a program P, he creates a number of programs, in this example P1 through P5, each of which reveals nothing about P by itself. Then he sends one to each manufacturer, receives back a number of tokens, some of which may be corrupt, and forwards all of them to the receiver. The receiver executes all these tokens in a way that lets him obtain the output P(x). This is our model. It is also possible that the receiver is corrupt and colludes with the corrupted manufacturers; in that case, we still want to guarantee that P remains hidden. So essentially, we want the guarantee that if at least one manufacturer is honest, then the adversary cannot learn anything about P. And we can see that if all of them are corrupt, then the adversary can learn all the descriptions and reconstruct the description of P. On the other hand, there is a limitation of this model. Suppose the adversary corrupts all but one manufacturer, say the second one, the one the sender sends P2 to, and replaces every other program with garbage. Then, if we wanted to realize the original token functionality, that would mean P2 by itself must suffice to compute P. At the same time, the adversary could instead have corrupted the manufacturer that gets P2, which means P2 by itself must not reveal P. A program that can compute P without revealing P is essentially black-box obfuscation, and because we cannot have that, we relax the requirement: in that case, all we require is that the token creation fails as a whole.
And in this case, the receiver can be notified of an abort; that is essentially the token functionality with abort. It is very similar to the standard token functionality: the sender sends a message to the functionality to create a token with program P. But this token functionality first notifies the adversary, who has the choice to interrupt the creation. In that case, the receiver is notified that the token creation has failed, and he will not be able to execute P on any input. The adversary can also choose to ignore the creation, and in that case, just like in the standard token functionality, the receiver can execute P on any input x. This is the functionality that we want to achieve in the corrupted token model. So what is the corrupted token functionality? It is also quite similar: the sender wants to create a token for a program P, and the adversary gets notified. By the way, I forgot to mention one thing: in the previous model, the token functionality with abort, the adversary never learns P at all. Here, instead of just interrupting, if the adversary chooses to corrupt the creation of the token, he actually gets P, and he can replace P with any different program P′, or the same program if he wants. In this case, the receiver is not notified at all, and when he thinks he is executing P on input x, he gets P′(x) back instead. The adversary can also choose to ignore the creation, and in that case he does not learn P at all. The corruption is done token by token: if the sender tries to create multiple tokens, the adversary can choose to corrupt some of the tokens but not the others, and he cannot change his mind later. After he chooses not to corrupt a token, say the one with P2, he will not be able to change his mind later in the protocol and corrupt it; that token is now actually tamper-proof.
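To make the difference between the two functionalities concrete, here is a toy sketch (my own illustration, not from the paper); the adversary is modeled as a callback that returns its decision:

```python
def token_with_abort(program, adversary_decides, x):
    """Token functionality with abort: the adversary may interrupt the
    creation WITHOUT seeing P; the receiver is then notified of failure."""
    if adversary_decides():          # True = interrupt, False = ignore
        return "FAIL"                # receiver learns only that it failed
    return program(x)                # otherwise behaves like a normal token


def corrupted_token(program, adversary_corrupts, x):
    """Corrupted token model: if the adversary corrupts the creation, it
    sees P and may substitute any P'; the receiver is NOT notified."""
    p_prime = adversary_corrupts(program)   # None = ignore; else replacement
    if p_prime is not None:
        return p_prime(x)            # receiver unknowingly gets P'(x)
    return program(x)


P = lambda x: x * 2
assert token_with_abort(P, lambda: True, 5) == "FAIL"
assert token_with_abort(P, lambda: False, 5) == 10
assert corrupted_token(P, lambda p: (lambda x: 0), 5) == 0   # P replaced
assert corrupted_token(P, lambda p: None, 5) == 10           # left honest
```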
So this is the model our protocol will work in, and we want to achieve the token functionality with abort in this model. Here is the overview of our solution. Let Π be the description of the program P. The sender uses an n-out-of-n secret sharing to create shares of Π, obtaining Π1 up to Πn, where n is the number of tokens he wishes to create to represent the single program Π. Basically, he believes that among these n manufacturers, at least one of them is honest. He also creates correlated randomness and embeds it into the tokens, in this example five tokens. So each token gets one secret share of Π, together with the correlated randomness, to do some computation. When the receiver wants to execute Π on input x, he gives x to every token, and the tokens perform an MPC among each other, with the receiver delivering the messages between the tokens. This delivery can be done adversarially if the receiver is corrupt. This MPC reconstructs Π from the n-out-of-n shares and then executes the program on the input x. So this is the overview of our solution. How do we achieve this? There are two things that we need. First, the ones that actually execute the MPC are the tokens, so we need some way to prevent resetting attacks. What we need is a simultaneously resettable zero-knowledge argument in the correlated randomness model. By this I mean that the soundness guarantee must hold even when the prover can reset the verifier, and the zero-knowledge property must hold even when the verifier can reset the prover as many times as he wants. This is the first thing that we need to construct. The second thing, which I already mentioned, is the UC-secure MPC for the tokens to run, also in the correlated randomness model.
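The n-out-of-n sharing of the program description can be sketched with plain XOR sharing, a standard construction: any n-1 shares look uniformly random, and only all n together recover Π. This is a minimal sketch of that idea, not the paper's actual scheme.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(pi: bytes, n: int) -> list[bytes]:
    # n-1 uniformly random shares; the last share completes the XOR to pi.
    shares = [secrets.token_bytes(len(pi)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, pi))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    # XOR of all n shares gives back pi; any n-1 shares reveal nothing.
    return reduce(xor_bytes, shares)

pi = b"description of program Pi"
shares = share(pi, 5)                 # one share per manufacturer/token
assert reconstruct(shares) == pi
```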
So when we combine these two components that we construct, we get the final result, which is the protocol that UC-realizes the token functionality with abort in the corrupted token model. And as I mentioned earlier, our result is based on one-way functions, so we want to construct each of these two components from one-way functions only. I will go a little bit quickly, because I only have eight minutes left. First, in order to construct the simultaneously resettable zero-knowledge argument, we start with a three-round public-coin zero-knowledge protocol in the CRS model, for example the one by MacKenzie and Yang. This protocol is based on one-way functions and has a straight-line simulator. Then we switch over to the correlated randomness model: we also add a secret key for a symmetric-key encryption scheme, and a commitment to it. The prover gets both the commitment and the decommitment information, so he can prove, using the zero-knowledge argument from the previous slide, that there exists a witness, and a secret key that decrypts the first message to that witness. It turns out that this resulting protocol is a zero-knowledge argument of knowledge in the correlated randomness model with a straight-line simulator, and it still has three rounds. In the second step, we generate more correlated randomness: basically, another commitment in the opposite direction, together with some random strings, which are d and s. The prover uses the three-round zero-knowledge argument of knowledge from the previous slide, but instead of the standard second message, the verifier uses a PRF to generate this message r′ instead. He then uses a simultaneously resettable witness-indistinguishability argument to prove that either the commitment can be decommitted to s and r′ was generated this way, or there exists a d′ such that d is a PRG applied to d′.
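The derandomization of the verifier's second message can be illustrated as follows (my own sketch, using HMAC-SHA256 to stand in for the PRF; names are hypothetical): the challenge becomes a deterministic function of the prover's first message, so a resetting prover who replays the same first message always receives the same challenge and gains nothing from the reset.

```python
import hmac, hashlib

def verifier_challenge(prf_seed: bytes, first_message: bytes) -> bytes:
    # r' = PRF_seed(first message): deterministic, so resetting the
    # verifier and replaying the same first message yields the same r'.
    return hmac.new(prf_seed, first_message, hashlib.sha256).digest()

seed = b"verifier's secret PRF seed"
a = verifier_challenge(seed, b"prover message 1")
b = verifier_challenge(seed, b"prover message 1")
c = verifier_challenge(seed, b"a different first message")
assert a == b       # replaying after a reset gives the same challenge
assert a != c       # distinct first messages get fresh-looking coins
```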
And this simultaneously resettable witness-indistinguishability argument already exists; it is a work by Chung, Ostrovsky, Pass, and Visconti. If you want the details of how we prove security here, you can refer to the full paper; note that we still have a straight-line simulator here. The second component that we need can be constructed from two protocols. The first one is the UC-secure MPC in the OT-hybrid model by Ishai, Prabhakaran, and Sahai. The second part is the OT in the correlated randomness model. This is essentially an OT extension, but with a special property that we need to add here: it supports an unbounded number of OTs. Why do we need an unbounded number of OTs? Because we will embed all this correlated randomness in the tokens, but we do not know how many times these tokens will be executed, so we cannot predict the number of OTs that will be used, which we would need to embed in the tokens. So we need a small modification of the standard OT extension. Here is a brief overview of Beaver's OT extension. The receiver has a short seed s, and uses a PRG to expand it to a long string; this string is used to select the bits from the sender. The left part is created as a garbled circuit and sent to the receiver, and a small number of OTs is used to send the garbled input for the short seed s to the receiver. In our modification, the sender and receiver have some short seeds s1, s2, and s3, and they exchange some commitments; note that the upper part is produced as correlated randomness by the trusted parties. And instead of the PRG, the seed of the receiver, together with the session number, which I denote by j, is put through a PRF with seed s1. The left-hand side is again garbled and sent to the receiver, except that in this case, the receiver only gets the garbled input for s2, but not for j.
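The role of the PRF in making the number of OTs unbounded can be sketched like this (hypothetical names; in the actual construction this value is computed inside a garbled circuit): the receiver's selection string for session j is derived by a PRF keyed with s1 from the receiver's seed s2 and the session number j, so fixed short seeds support arbitrarily many sessions.

```python
import hmac, hashlib

def selection_bits(s1: bytes, s2: bytes, j: int, m: int) -> list[int]:
    # Selection string for session j: a PRF (HMAC-SHA256 here) keyed
    # with s1, applied to the receiver's seed s2 and the session number
    # j, expanded block by block and truncated to m bits.
    out = b""
    counter = 0
    while len(out) * 8 < m:
        msg = s2 + j.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hmac.new(s1, msg, hashlib.sha256).digest()
        counter += 1
    return [(out[i // 8] >> (i % 8)) & 1 for i in range(m)]

# Fixed short seeds yield a fresh selection string per session j = 1, 2, ...
s1, s2 = b"seed-s1", b"seed-s2"
assert selection_bits(s1, s2, 1, 64) == selection_bits(s1, s2, 1, 64)
assert selection_bits(s1, s2, 1, 64) != selection_bits(s1, s2, 2, 64)
```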
And then in the j-th session, when the receiver wants to get new additional OTs, the sender creates a new garbled circuit, except that the part corresponding to s2 remains the same, so he does not have to use the OTs to send over the garbled input for s2 again. This allows the receiver to repeat the session as many times as he wants. The sender also needs to use zero-knowledge to prove that he actually computes everything according to the scheme. So now we want to put everything together and finish the presentation. We have a functionality F, which takes the secret shares of the program Π and the inputs x1 up to xn. If you remember one of the slides I used earlier, these x1 up to xn are supposed to be the same x, the input on which the receiver of the tokens wants to execute. So this functionality checks whether they are all the same, sets x to that value, combines the secret shares to get the program Π, and executes it on input x. We use the MPC for this functionality in the correlated randomness model, and then we wrap it using the simultaneously resettable zero-knowledge in the correlated randomness model. So the blue box here is somewhat of a resettable MPC, but for this specific functionality. Then we break it up and put it into tokens: we have another protocol, a token wrapper, to wrap around each party in this MPC. This token wrapper takes care of the state of the MPC and delivers the messages between tokens. So finally, we have the token protocol in the corrupted token model. The final step is to reduce the size of this token, so that it holds only a small piece of the program description and takes only short inputs; this is also an interesting part by itself. As a conclusion, we have defined the corrupted token model, which allows the adversary to corrupt a would-be tamper-proof token at the time of its creation, and this better models the untrusted manufacturers in the real world.
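The functionality F that the tokens jointly compute can be sketched as follows (illustration only; here the program is a toy 256-entry lookup table over one-byte inputs, and the shares are XOR shares of that table):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def F(pi_shares: list[bytes], inputs: list[bytes]):
    """The functionality the tokens jointly compute:
    1. check that every token received the same input x;
    2. reconstruct pi from the n-out-of-n XOR shares;
    3. run the program on x (here, a table lookup)."""
    if any(x != inputs[0] for x in inputs):
        return "ABORT"                      # inconsistent inputs
    x = inputs[0]
    pi = reduce(xor_bytes, pi_shares)       # recombine the program
    return pi[x[0]]                         # toy 'execution' of pi on x

# The program pi maps a one-byte input i to (i + 1) mod 256.
pi = bytes((i + 1) % 256 for i in range(256))
shares = [secrets.token_bytes(256) for _ in range(4)]
shares.append(reduce(xor_bytes, shares, pi))  # 5 XOR shares of pi

assert F(shares, [b"\x29"] * 5) == 42         # all tokens agree on x = 0x29
assert F(shares, [b"\x29"] * 4 + [b"\x00"]) == "ABORT"
```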
And what we achieve is a protocol that UC-realizes the tamper-proof token functionality with abort in the corrupted token model, assuming only one-way functions. We can combine this protocol with any MPC in the tamper-proof token model to get MPC in the corrupted token model. And that's the end of my talk; you can find the paper on ePrint. Any questions? OK, let's thank the speaker again.