Hi everyone, I'm Bingsheng Zhang from Zhejiang University. Today I will talk about crowd verifiable zero-knowledge and end-to-end verifiable multi-party computation. This is joint work with Foteini Baldimtsi, Aggelos Kiayias, and Thomas Zacharias. The talk is divided into four parts. First, I will give some motivation and background; then I will introduce our main results; then I will talk about CVZK, which stands for crowd verifiable zero-knowledge; and at the end, I will talk about VMPC, which in this talk stands for end-to-end verifiable MPC.

Motivation. In conventional MPC, there are some players, for example P1, P2, and P3, who want to jointly compute a function F over their inputs x1, x2, x3, with some security guarantees. For instance, when some of the players are corrupted, we may still want input privacy (P1's input remains unknown to the others), or output correctness (F(x1, x2, x3) is computed correctly), or input independence, fairness, guaranteed output delivery, et cetera. Many of these security properties can be captured by an ideal functionality in the UC framework. But here I want to emphasize that only some of the players can be corrupted, not all: in the UC framework, if all the players are corrupted, the simulation becomes trivial, because there are no honest players left to simulate.

In real life, however, MPC is more and more used in the so-called client-server mode, where the clients are the input and output nodes and the MPC players are the computing nodes. This means the MPC players do not hold the inputs; the inputs are given by clients who do not participate in the computation. In this case, what if all the computing servers, the MPC players, are malicious? Of course, we have no privacy, because all of them are malicious. But how about correctness? This is what the notions of auditability or verifiability refer to: the clients, or any third-party auditors or observers, can verify the MPC result.

In a publicly verifiable MPC, we typically introduce another entity called the bulletin board (BB). The BB is a trusted entity where everybody can post data, nobody can erase data, and everybody can read. In reality, it may be realized by a server, a blockchain, multiple servers, a broadcast channel, et cetera. In this talk, we will not discuss the realization of the BB; we just assume one exists. There are several external observers who, by looking at the transcript the MPC players post to the BB, can output a decision: yes or no, the result is correct or not. Of course, this notion is not entirely precise, because those observers, those auditors, are not among the input players. For example, they do not know the inputs, and if the MPC players switch the inputs, the auditors will not be able to tell. I will come back to this later.

Previous work has considered this setting. The well-known publicly auditable MPC work by Baum, Damgård, and Orlandi (BDO14, SCN 2014) modifies the SPDZ protocol by attaching a commitment to each shared value. The secret sharing is additively homomorphic, the commitment is also additively homomorphic, and they are over the same field, so the plaintext spaces coincide and they share the same homomorphism. The commitments are posted on the BB. The intuition is as follows: when the MPC players perform a homomorphic operation on the shared values, the auditors can replay the same operation on the commitments, because they share the same homomorphism. And when the MPC players open a shared value, they also open the corresponding commitments on the BB, so that everybody can perform the same MPC computation over the commitments on the BB, and therefore the result is verifiable.
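To make this intuition concrete, here is a toy Python sketch of an auditor replaying an addition on posted commitments while the servers add their shares. All parameters and names are illustrative and cryptographically insecure (a real system would use proper Pedersen commitment parameters); this is only a sketch of the mirrored-homomorphism idea.

```python
# Toy sketch of the BDO14 idea: additively homomorphic commitments
# mirror additively homomorphic secret shares. Parameters are tiny
# and insecure; illustration only.
import random

p = 2**127 - 1          # toy prime modulus (not a safe group choice)
q = p - 1               # exponent domain
g, h = 5, 7             # toy bases; in practice, proper Pedersen generators

def commit(m, r):
    # Pedersen-style commitment: Com(m, r) = g^m * h^r mod p
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def share(x, n):
    # additive secret sharing over Z_q
    s = [random.randrange(q) for _ in range(n - 1)]
    return s + [(x - sum(s)) % q]

# Each server holds one share of x and y and posts a commitment to it.
x, y = 1234, 5678
xs, ys = share(x, 3), share(y, 3)
rx = [random.randrange(q) for _ in range(3)]
ry = [random.randrange(q) for _ in range(3)]
cx = [commit(s, r) for s, r in zip(xs, rx)]
cy = [commit(s, r) for s, r in zip(ys, ry)]

# Servers locally add their shares (a share of x + y) ...
zs = [(a + b) % q for a, b in zip(xs, ys)]
rz = [(a + b) % q for a, b in zip(rx, ry)]

# ... while the auditor multiplies the posted commitments, because
# the two structures share the same homomorphism.
cz = [(a * b) % p for a, b in zip(cx, cy)]
assert all(commit(s, r) == c for s, r, c in zip(zs, rz, cz))

# When the servers open z = x + y, anyone can check the opening
# against the product of all commitments on the BB.
c_total = 1
for c in cz:
    c_total = (c_total * c) % p
assert c_total == commit((x + y) % q, sum(rz) % q)
```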
The exact security guarantees BDO achieves are: when at least one server is honest, we have privacy and correctness; when all the servers are corrupted, we still have correctness. As for assumptions, BDO needs a commitment key, which is realized by a CRS; it can also be achieved with a random oracle, since it is a Pedersen commitment. And in the paper, they need the random oracle for the NIZK, the non-interactive zero-knowledge proofs. There is other work on this topic; for example, universally verifiable MPC by SV15 uses a threshold homomorphic cryptosystem and is also in the random oracle model.

In this work, we propose a new concept: end-to-end verifiable multi-party computation. It extends the conventional VMPC concept by explicitly addressing client subversion: we separate the users from their clients. Looking at this picture, the inputs are given by the users U1 to U5 to their clients C1 to C5, and the clients then submit them to the MPC players P1, P2, P3. This is very close to the notion of a ceremony: a ceremony is an extension of a protocol where we also model the human behaviors in the picture. This separation allows a more refined security treatment. For example, what if the client C1 is malicious but the user U1 is honest? In the conventional setting, where C1 and U1 are treated as one party, you cannot express such a refinement.

In our model, the servers and the client devices are stateful probabilistic interactive Turing machines, while the human users have limited computational power and limited entropy. For limited entropy, we assume the randomness a user provides can be adversarially guessed with non-negligible probability, which is fair. For the humans' computational power, we assume it is linear in the security parameter, the minimum required to read the inputs, and that is all: a human cannot perform cryptographic operations.

Our results: we construct the first auditable MPC protocols in the standard model, in the sense that everyone can access the transcript and verify that the output is correct, even if all the servers and all the clients are subverted by the adversary. In more detail, we actually prove security in the extended UC model, EUC, with a helper H. We also study feasibility and infeasibility results for this kind of MPC: which functions it can compute, and for which functions it makes sense. For example, e-voting, privacy-preserving machine learning, privacy-preserving statistics, et cetera.

In general, a VMPC protocol consists of four phases. The first phase, initialization or pre-processing, is run by the servers: the MPC servers prepare and commit the Beaver triples and parameters to the BB. The second phase, the input phase, is run by the servers, the users, and the clients: each user provides their input to their input device, their client; the client interacts with the servers; and the user receives some audit data afterwards.
The third phase is the computation phase, executed purely by the servers: the servers compute the result Y and post it to the BB together with some public audit data. The fourth phase is verification, run by the verifier and the users: the auditor uses the audit data from the BB together with some audit data from the users. Note that between the human and the client we assume a secure channel, so the adversary cannot even sniff the user's input; and from the user to the auditor, the verifier, we assume an authenticated channel.

Let's recap SPDZ. In SPDZ, without considering the linear MACs, it roughly looks like this. The secret sharing is additively homomorphic: you split x into shares x1, x2, ..., xn, and each server gets one share. In the offline phase, the servers jointly compute Beaver triples (a, b, c), where c = a·b. In the online phase, to perform an addition you just add the shares locally, because the sharing is additively homomorphic. To perform a multiplication, you consume one Beaver triple: first you open ε = x1 − a and δ = x2 − b, and then you compute the share of x1·x2 homomorphically.
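Here is a minimal Python sketch of this online multiplication, with the SPDZ MACs omitted as in the recap above; variable names are mine, not the paper's.

```python
# Minimal sketch of Beaver-triple multiplication on additive shares
# (the SPDZ online phase, MACs omitted). Arithmetic is over Z_q.
import random

q = 2**61 - 1

def share(v, n=3):
    s = [random.randrange(q) for _ in range(n - 1)]
    return s + [(v - sum(s)) % q]

def reveal(shares):
    return sum(shares) % q

# Offline phase: a random shared triple (a, b, c) with c = a * b.
a, b = random.randrange(q), random.randrange(q)
A, B, C = share(a), share(b), share((a * b) % q)

# Inputs x and y, shared among the servers.
x, y = 42, 99
X, Y = share(x), share(y)

# Online phase: open eps = x - a and delta = y - b ...
eps = reveal([(xi - ai) % q for xi, ai in zip(X, A)])
delta = reveal([(yi - bi) % q for yi, bi in zip(Y, B)])

# ... then each server computes its share of x*y locally:
#   [xy] = [c] + eps*[b] + delta*[a] + eps*delta  (constant added once)
Z = [(ci + eps * bi + delta * ai) % q for ci, ai, bi in zip(C, A, B)]
Z[0] = (Z[0] + eps * delta) % q

assert reveal(Z) == (x * y) % q
```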
Okay, now CVZK. The main challenge in getting verifiable MPC in the standard model is that we cannot use regular zero knowledge: we cannot use regular ZK in the offline phase of either SPDZ or BDO. We need a new notion, because in our setting the servers and the clients are malicious, and only the humans are honest; in the picture, there are only humans. The verifiers do not participate in the protocol at all; they are just observers, there only to verify the result, and they can be removed from the picture when we run the VMPC. So we need to source some randomness from the humans.

Crowd verifiable ZK roughly works as follows. In the first move, the prover gives a to the verifiers. The verifiers each independently produce some challenge c_i; each c_i carries very little entropy, for example one bit. Then the prover gives the third move z. We assume there is a public verification algorithm: you apply it and get a yes or no. Completeness, soundness, and zero knowledge are parameterized by the number of verifiers that can be corrupted.

The construction challenge is that the randomness is produced by humans, by a crowd, and no single one of them has enough entropy to challenge the prover. How can we glue or combine them in some way to produce a sound protocol? The joint randomness should be statistically close to a uniformly random string; that is what we want to achieve, and we achieve it with a new primitive called a coalescence function.

Someone may say: we can use a one-round collective coin-flipping protocol. That is, when each verifier gives c_i, we apply a function to the c_i's, get a uniform string, extract some bits, and use them to challenge the proof. Unfortunately, that is not possible: there are impossibility results showing that a one-round collective coin-flipping protocol cannot extract that much entropy. It does not work. So we relax the setting and propose a coalescence function.

A coalescence function produces a collection of strings such that one of them is sufficiently close to uniformly random, which means one of them has enough entropy to challenge the CVZK. Roughly, F takes inputs c_1, ..., c_n and outputs m strings, each of k bits, and we can guarantee that, with overwhelming probability, at least one of the strings is statistically close to uniform. About the others, we do not care.

How can we do that? Assume for now that each human verifier supplies one bit, so the c_i's are single bits. We take the c_i's and partition them into λ·log λ blocks, where λ is the security parameter, and we group every λ blocks together. For each block, we apply a one-round collective coin-flipping function to extract one bit; the extracted bits look like a non-oblivious bit-fixing source. So each group yields λ bits. What if the function F is biased? Then we perform von Neumann rejection sampling: we pay the cost of losing some bits, but we produce perfectly balanced bits.
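Below is a small Python sketch of this extraction pipeline. Note that XOR stands in for the per-block one-round collective coin-flipping function purely for illustration; the real construction needs a genuinely resilient function, which XOR is not. The von Neumann step matches the rejection sampling just described.

```python
# Illustrative sketch of the coalescence pipeline, assuming each
# verifier contributes one bit. XOR is only a placeholder for the
# per-block extraction function.
import random

def extract_per_block(bits, block_size):
    # partition into blocks, extract one bit per full block
    blocks = [bits[i:i + block_size] for i in range(0, len(bits), block_size)]
    return [sum(blk) % 2 for blk in blocks if len(blk) == block_size]

def von_neumann(bits):
    # rejection sampling: 01 -> 0, 10 -> 1, 00/11 -> discard.
    # We lose bits, but the surviving bits are exactly balanced
    # whenever the inputs are i.i.d. with a fixed bias.
    return [b0 for b0, b1 in zip(bits[0::2], bits[1::2]) if b0 != b1]

human_bits = [int(random.random() < 0.7) for _ in range(4096)]  # biased crowd
extracted = extract_per_block(human_bits, block_size=16)
challenge_stream = von_neumann(extracted)
```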
The CVZK construction in general works as follows. We ask the prover to prove a disjunction of the following two statements: either it knows a witness for x ∈ L, or it can invert a hard instance of some one-way function. For this we use adaptive one-way functions, and we need an additional property called public samplability. Public samplability means everybody can sample an image from the image space using public coins.

An adaptive one-way function is a tag-based function f_tag, and we want it to be hard for a PPT adversary to invert f_tag on a randomly sampled image, even if it can access an inversion oracle for f_tag' for any tag' ≠ tag. In other words, the f_tag' inversion oracle does not help the adversary invert f_tag. And public samplability, as I just mentioned, means there exists a public algorithm, call it Im, with which you can sample an image.

So the construction works as follows. First, the prover runs a normal sigma protocol to produce λ·log λ copies of the first move, the a_i's. Meanwhile, it picks some randomness ρ that will be used later. It sends the a_i's to the verifiers. The verifiers then give their challenges c, and we apply the coalescence function to c to get d_1 through d_{log λ}; note that each of them has λ/log²λ bits. Then we compute the challenge e by combining ρ with the verifiers' contribution, and we parse e into λ·log λ blocks, call them e_i. For each e_i, we use it as the challenge to complete the sigma protocol, Σ.Prove(w, a_i, e_i), to get z_i, and we send the z_i's to the verifiers. The verifiers can then check that all λ·log λ parallel executions are correct, and accept.

Now the second part. If we want the simulator to work, the simulator usually must invert the one-way function, right? The first question is: where do the public coins ω for sampling the hard instance come from? The answer: ω comes from the verifiers. However, we must give the first moves of the zero-knowledge proofs in advance, before seeing the verifiers' challenges. We take the verifiers' challenges and use them as ω, but they only arrive in the second move, and we need to send a first move before that. So we actually need input-delayed protocols. An input-delayed protocol allows the prover to send the first move of the zero-knowledge proof, then learn the statement to be proven, and then complete the proof; the statement does not need to be fixed up front, only before the final move.

So the prover runs λ·log λ parallel executions of the simulator of the input-delayed protocol, producing first moves a*_j, and it sends the a*_j's together with the a_i's. Why the simulator? Because an honest prover never uses the second branch: it never inverts the one-way function, it only simulates that branch. Then, upon receiving c, we apply the coalescence function to c to get the d_j's; for each d_j, we use it as public coins to sample an image, a hard instance x*_j; we treat x*_j as the statement and simulate to get z*_j; and we send the z_i's and z*_j's to the verifiers. The verifiers now verify both parts. This is the complete picture: the verifier checks that e was derived correctly from ρ and the challenges, verifies all the one-way-function zero-knowledge proofs, and verifies all the x ∈ L zero-knowledge proofs. Due to time limitations, I cannot go through the details; if you want to know more, please refer to our paper. For security, we can tolerate up to roughly n^{1−o(1)} corrupted verifiers; the exact bound, which involves λ and log³ n, is in the paper.

Now the last part, VMPC. Once we have CVZK, we can construct the VMPC quickly. For time limitations, I only give you an example with one server; of course, with a single server, once the server is corrupted we have no privacy. It roughly works as follows. In the initialization phase, the server S posts the commitment key to the BB and proves that the commitment key is binding, using CVZK. S also generates two random values, R_0 and R_1, sends them to the client, and commits them to the BB. And S generates the Beaver triples, commits them to the BB, and proves their correctness using CVZK.

Input phase: the client displays R_0 and R_1 to the user. The user flips a coin and picks one of them to use and the other one to check. Say it picks b: it masks its input x_i using R_b to get d_i, gives d_i to the client, and the client posts it to the BB. During the computation phase, S opens the unused random value; for example, if R_0 was used, then R_1 is opened. S evaluates the function on the committed values, the commitments of the users' inputs, and gets Y, which can then be checked by the verifier. The audit data received from the human users is just the unused random value and the bit they picked; the rest of the transcript is posted on the BB. The auditor verifies that the commitment openings are correct, verifies the CVZK proofs, and recomputes the circuit. Note that a malicious client can still alter the user's input with probability one half, because if it guesses the user's coin, it can modify R_b. But this is the best we can do.
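To illustrate the single-server input phase and audit, here is a minimal Python sketch of the user's one-bit cast-or-audit coin flip; the structure and names are illustrative only, not the paper's exact protocol.

```python
# Sketch of the one-bit cast-or-audit input check from the
# single-server example. Names and structure are illustrative.
import random

q = 2**31 - 1

# Initialization: the server commits to two random masks R0, R1 on
# the BB and sends them to the client, who displays them to the user.
R = [random.randrange(q), random.randrange(q)]

# Input: the user flips a coin b, masks its input x with R_b, and
# remembers the other mask R_{1-b} for auditing.
x = 7
b = random.randrange(2)
d = (x + R[b]) % q           # masked input, posted to the BB
user_view = (b, R[1 - b])    # what the honest user remembers

# Computation: the unused mask is opened on the BB.
opened = R[1 - b]

# Verification: the auditor checks the opened mask against the value
# the user saw. A client that tampered with the audited mask is
# caught; one that guessed b and tampered with R_b is not -- hence
# the 1/2 cheating probability.
def audit(user_view, opened_mask):
    _, remembered = user_view
    return remembered == opened_mask

assert audit(user_view, opened)
```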
This is known as the Benaloh challenge: because we have only one bit from the human, this is all we can tolerate. That is why we need to study which functions this kind of MPC can compute meaningfully; for example, e-voting makes sense.

Take-aways. First, we prove that our end-to-end verifiable MPC is secure in the extended UC model with a helper H, using adaptive one-way functions, input-delayed protocols, and commitments. Second, the concrete efficiency of VMPC protocols largely depends on the efficiency of the input-delayed zero-knowledge protocols; in fact, it could be very efficient. At the end, I would like to thank the grants that supported this work. Thank you very much. If you have any questions, please email me, and you can find the full version at the link shown.