Hi, I'm Carsten Baum and I'm going to present our recent work on efficient constant-round MPC with identifiable abort and public verifiability. This is joint work with Emmanuela Orsini from KU Leuven and with Peter Scholl and Eduardo Soria-Vazquez from Aarhus University. The paper has been accepted at Crypto 2020.

Let me begin with the motivation and a high-level statement of the problem we solve in this work. Consider the following example, where four parties P1 to P4 want to bid for a certain good. Party one bids three coins, party two bids four coins, and so on and so forth. In an ideal setting these parties would send their bids in sealed envelopes to an auctioneer, who would compute a function on these bids by looking into the envelopes and checking how much each party bid. In this case it would identify party two as having the highest bid with four coins, and it would then announce to all parties that party two has won this auction with a bid of four coins. In practice we want to eliminate this so-called trusted third party, the auctioneer, and replace it by a secure multi-party computation (MPC) protocol which simulates this TTP. The MPC protocol simulates the trusted third party as follows: by the correctness guarantee of the MPC protocol, the simulated trusted third party outputs exactly the same value as the actual trusted third party would, and by the privacy of the MPC protocol only the highest bid is announced at the end of the computation, while all lower bids remain invisible to a potential adversary.

The problem in this setting is that fairness is not guaranteed. Fairness is the property that if the adversary gets the output of a secure computation, then the honest parties also get this output. Due to an impossibility result by Cleve from 1986, we know that in a dishonest-majority setting fairness cannot be guaranteed, and in the setting we consider here there is no honest majority. Fairness is an important property here: consider the following situation. Because the protocol is not fair, we run this auction, the adversary learns that Alice gave the highest bid of four coins, and afterwards the protocol aborts. Since the honest parties do not know what happened, the best they can do is rerun the auction, but now the adversary has an advantage, namely knowing that Alice would bid four coins, and this means the adversary can unfairly bias such an auction.

An alternative to a fair protocol is what is called a protocol with identifiable abort. Let us compare an unfair protocol with identifiable abort. In an unfair protocol, the adversary always learns the output of the computation and then either allows the honest parties to obtain it as well or denies them the output.
In a protocol with identifiable abort we are still in the situation that the adversary always learns the output of the computation, but now it either lets the honest parties obtain the output as well, or the honest parties at least learn the identity of one of the corrupted parties. We know that this is achievable with existing cryptography: a tweak of the original GMW protocol already achieves it, in particular using a compiler by Ishai, Ostrovsky and Zikas from 2014.

Our contributions are as follows. We present an actively secure multi-party computation protocol that has this identifiable-abort property and runs in a constant number of rounds. Our protocol is proven secure against any static adversary that controls a dishonest majority of the parties, but not all parties at the same time. Our protocol has a reasonable overhead over the best existing constant-round MPC protocols that do not have identifiable abort. In addition, we show a transformation that also yields public verifiability of the output, meaning that a third party can look at the transcript of the computation and either verify that the output was computed correctly or identify a cheater in the computation; this can be realized using, for example, a bulletin board.

Let me mention some work that is related to ours. We are not the first to consider MPC with identifiable abort. In 2014, Ishai, Ostrovsky and Zikas presented a GMW-style compiler that constructs MPC with identifiable abort from adaptively secure oblivious transfer; their compiler can also be applied to existing constant-round protocols and then yields a constant-round MPC protocol with identifiable abort. In work by myself, Emmanuela and Peter, as well as by Spini and Fehr and by Cunningham, Fuller and Yakoubov, there are constructions of MPC with identifiable abort based on the SPDZ protocol; since SPDZ itself is not constant round, these protocols are not constant round either. In 2015, Kiayias et al. gave a construction (KZZ-15) that achieves publicly verifiable MPC with identifiable abort based on NIZKs. We will show how we compare to this construction and to IOZ-14 a bit later. Both IOZ-14 and KZZ-15 use zero-knowledge proofs in a very crucial way, and the protocols one obtains from them are generally rather inefficient, so these are more feasibility results. In recent work with David, Dowsley, Nielsen and Oechsner we additionally show that one can also obtain MPC with output-independent abort. This is like MPC with identifiable abort, but there the adversary has to decide whether it wants to reveal the output to the honest parties before actually seeing the output, whereas in our case the adversary can see the output first and then make its decision.

First, let me recap how the compiler by Ishai et al. actually works. In their construction they take an arbitrary MPC protocol and add a preprocessing phase to it, and this preprocessing is then used to make the overall construction secure with identifiable abort. The preprocessing is independent of the actual inputs that the parties will later provide to the MPC protocol, so the observation of Ishai et al. is that the preprocessing can be fully revealed in case there are any errors. The preprocessing works as follows: the parties first commit to a random tape, and then they generate correlated randomness, which in this case means that they preprocess zero-knowledge proofs.
Since all the communication runs through a broadcast channel, the parties always know whether another party sent its protocol messages or not. If there is any problem in generating this correlated randomness during the preprocessing, the parties simply open the commitments that they made to their random tapes. This does not reveal any information about the online phase or about the actual inputs, because these are just random values. They then jointly compare the transcript produced by all parties with the randomness that was committed to, and if everything is consistent they punish the party that complained, and otherwise they identify the cheater this way. In the online phase they run the actual MPC protocol and prove in each step, in zero-knowledge, that the messages are well formed. This also allows identifiable abort, by simply verifying the zero-knowledge proofs.

One natural idea for obtaining MPC with identifiable abort in a constant number of rounds is to compile constant-round protocols, and these are generally based on the so-called BMR paradigm, which uses garbled circuits. So let me quickly recap garbled circuits for passively secure two-party computation. This is due to Yao, and here we have two parties, a garbler and an evaluator. The garbler takes the circuit that is supposed to be computed securely, let's call it C, garbles it and makes it unintelligible; the result is called C'. The garbler sends this garbled circuit to the evaluator, and afterwards the parties run an input-encoding step where party one provides its input and party two provides its own input, and the result is an encoding of both parties' inputs, which is then sent to party two, the evaluator. The evaluator then uses the evaluation algorithm to compute the output of the actual computation. The important point is that the garbled circuit, together with the input encodings, reveals only the output and no other information. In order to also provide Alice with the output, the evaluator then sends the output to her.

In a multi-party setting we can do something similar using the following approach. First, we use a generic MPC protocol which performs the garbling step inside the MPC; this garbling step outputs the garbled circuit, but possibly with some additional error on top. The parties also use the MPC protocol to compute the input-encoding step. Since both the input encoding and the actual garbling are of constant depth, this gives a constant-round protocol if implemented with an arbitrary MPC protocol. The parties can then each locally run the evaluation algorithm on the encoded inputs. This can always be done locally, and if there is no error, the multi-party garbled-circuits protocol guarantees that the correct output is revealed; otherwise an abort happens. In our case we obviously want that, instead of an abort happening, a dishonest party is revealed in this step. One could try to directly compile a constant-round protocol with IOZ-14, as mentioned before. So let us take as an example the protocol by Hazay, Scholl and Soria-Vazquez, which is also the core of our construction, and compile it with the compiler by Ishai et al. What actually is the disadvantage of this approach?
The first point is that the protocol by Hazay et al. already has a preprocessing step which is independent of the actual inputs, so it makes little sense to put another round of input-independent preprocessing in front of it; one would like to merge these two phases. Also, garbling means that PRFs need to be evaluated, and this is not only inefficient when done inside multi-party computation, it would also be inefficient to prove in zero-knowledge that one evaluated the PRF correctly, so we would like to avoid this. If we were to garble directly using HSS-17 and then apply the IOZ compiler on top, we would obtain a protocol that needs n² zero-knowledge proofs for each gate that is garbled, where n is the number of parties, whereas in our construction we will not need any zero-knowledge proofs whatsoever in order to garble the circuit.

Let me now explain how we solve this problem, based on the protocol by Hazay et al. First, let me recap how to garble an AND gate, which is the core of the garbled-circuits approach. Instead of having the inputs and outputs of a gate in the clear, we replace the input bits on the input wires U and V with bit strings, and we do the same with the output wire W. That means the zero on wire U is replaced by a random bit string, which we call K_U^0, and likewise for the one on wire U, and the same for V and W respectively. Then one encrypts the output keys of the respective bit values under the input keys: we encrypt the zero key K_W^0 under the input keys where both inputs are zero, or where one input is zero and the other is one, and we encrypt the one key K_W^1 under the input keys that correspond to both inputs being one. After this encryption process one shuffles the table, so that during the evaluation one cannot tell which truth value is currently being decrypted.

In the setting where we have more than two parties, say n parties, the garbling works a bit differently; in this example we consider the HSS garbling. Here, instead of a single key for each of the zero and one values on a wire, we have a vector of keys for the inputs and outputs respectively, meaning that instead of one, say, 128-bit key we have a 128-bit key for each of the n parties, and these n 128-bit keys together make up the whole key of the garbled gate. Each party provides one part of the key: party one provides the first block, party two the second block, and so on, and each row of the garbled gate then consists of an encryption of each of the output blocks under all the different input keys for the respective truth values. Creating all of these encryptions, that is, the whole garbling, happens during a preprocessing phase and is independent of the actual inputs. During the evaluation the parties hold actual keys for some respective truth values, for example the one-key for U and the one-key for V, and they want to obtain the output of the garbled gate; so the parties decrypt the output key and use it as input to, for example, the next AND gate.
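To make this key-vector structure concrete, here is a minimal Python sketch of garbling and evaluating a single AND gate with one key block per party. It is only an illustration of the data layout, not the actual HSS/BMR construction: SHA-256 stands in for the PRF, there is no point-and-permute or wire masking, the garbling is computed in the clear rather than inside MPC, and all names and parameters (N, BLOCK, pad, fresh_wire, garble_and, evaluate_and) are invented for this sketch.

```python
import os, hashlib, random

N = 3          # number of parties
BLOCK = 16     # bytes each party contributes to every wire key
KEYLEN = N * BLOCK

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pad(key_vec, gate_id: int) -> bytes:
    """Derive a KEYLEN-byte one-time pad from a full input-key vector
    (a stand-in for the per-party PRF evaluations in BMR/HSS)."""
    out, ctr = b"", 0
    while len(out) < KEYLEN:
        out += hashlib.sha256(bytes([gate_id, ctr]) + b"".join(key_vec)).digest()
        ctr += 1
    return out[:KEYLEN]

def fresh_wire():
    """Each party contributes one random block to the wire's 0-key and 1-key."""
    return {bit: [os.urandom(BLOCK) for _ in range(N)] for bit in (0, 1)}

def garble_and(u, v, w, gate_id=0):
    """Four rows: the output-key vector for (bu AND bv), encrypted under the
    pad of the matching input-key vectors, then shuffled."""
    rows = []
    for bu in (0, 1):
        for bv in (0, 1):
            out_key = b"".join(w[bu & bv])
            rows.append(xor(out_key, pad(u[bu] + v[bv], gate_id)))
    random.shuffle(rows)   # hide which row encodes which truth values
    return rows

def evaluate_and(rows, ku, kv, me, my_out_blocks, gate_id=0):
    """Try every row with the held input keys; accept the row whose `me`-th
    block equals one of the two output blocks this party contributed."""
    for ct in rows:
        cand = xor(ct, pad(ku + kv, gate_id))
        if cand[me * BLOCK:(me + 1) * BLOCK] in my_out_blocks:
            return cand
    raise RuntimeError("no row decrypts to a valid key: someone cheated")

# Toy run: garble one AND gate and evaluate it on u = 1, v = 1 as party 0.
u, v, w = fresh_wire(), fresh_wire(), fresh_wire()
gate = garble_and(u, v, w)
out = evaluate_and(gate, u[1], v[1], me=0, my_out_blocks=(w[0][0], w[1][0]))
assert out == b"".join(w[1])   # AND(1,1) = 1, so we recover the 1-key of W
```

The acceptance test at the end of evaluate_and is exactly the per-party correctness check I discuss next: each party only compares the block it contributed against its own two output-key blocks.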
There are two possible output keys that could result from this decryption: the zero-key, which corresponds to the truth value zero, or the one-key. During the evaluation, as said, the parties decrypt the respective key, in this case K_W^1, using the two input keys that they hold, and then they need to check whether this is actually a correct output key, i.e. that nobody cheated during the garbling. Remember that the garbling might add some error E, because it is produced inside multi-party computation. To establish correctness, each party checks whether the block of the decrypted key that it provided contains its respective zero- or one-key. As mentioned before, in the garbling each party provides one block of each key, party one the first so-many bits, party two the second block, and so on; so during the evaluation party one, for example, simply checks whether the first 128 bits of the decrypted key correspond to either the zero-key or the one-key that it contributed. If not, the parties know that some cheating happened, and in the usual HSS evaluation the parties would then abort.

To obtain identifiable abort instead, we make the following observation. During the garbling, each party first generates an additive share of the full encryption of the output key in each row under the different input keys: each party has an additive share of the output key and shares of the individual input keys, and from these it generates a share of the encryption; afterwards, each party broadcasts this share of the encryption. In the evaluation, after summing up all the shares, a party decrypts the output key in the respective row by applying the two input keys that it holds. Now we observe that this broadcast step already reveals the shares, and that revealing a bit more does not hurt the security of the overall protocol. Namely, say that at a certain gate, in a certain row, a party detects that the decryption does not work correctly. If the parties now broadcast the U and V keys that they used to generate this encryption, this does not hurt, because during the evaluation each party already holds these as part of the decryption keys that it applies; so this poses no security issue. The same applies to the share of the output vector that one would obtain, because the adversary already knows the correct output that it would get in this situation. So whenever we see an error in a specific row, all we have to do is recompute the garbling of that row in public. To be able to do this public reconstruction, we have to have the parties commit to their shares and then open these commitments before we do the re-garbling in public, because otherwise the parties could lie about the shares that they used to garble their respective gates. That means that during the preprocessing the parties commit to the shares that they use; during the evaluation we identify the first gate where an error occurs; the parties then decommit the shares that they used to garble the specific row that was decrypted; and during the recomputation everyone finds out where an error occurred, or whether an error occurred at all, and this is used to identify the cheater.
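Here is a small, hypothetical Python sketch of this commit-then-recompute idea for a single row. The share each party is supposed to broadcast is modelled as a deterministic function of a committed seed, and simple hash commitments stand in for the homomorphic commitments used in the actual protocol; the names (honest_share, identify_cheater, and so on) are invented for illustration and are not taken from the paper.

```python
import os, hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def honest_share(seed: bytes, row_id: bytes) -> bytes:
    """Stand-in for the garbling share a party should broadcast for this row."""
    return H(b"share", seed, row_id)

n, row_id = 4, b"gate-7/row-2"

# Preprocessing: every party commits to its secret material over broadcast.
seeds  = [os.urandom(16) for _ in range(n)]
nonces = [os.urandom(16) for _ in range(n)]
coms   = [H(b"com", seeds[i], nonces[i]) for i in range(n)]

# Garbling: every party broadcasts its share of the row; party 2 cheats,
# so the row will not decrypt to a valid output key during evaluation.
shares = [honest_share(seeds[i], row_id) for i in range(n)]
shares[2] = os.urandom(32)

# Evaluation failed for this row: open the commitments and recompute the
# row in public; whoever's broadcast share does not match is the cheater.
def identify_cheater(coms, shares, seeds, nonces, row_id):
    for i in range(n):
        if H(b"com", seeds[i], nonces[i]) != coms[i]:
            return i   # invalid opening of the commitment
        if honest_share(seeds[i], row_id) != shares[i]:
            return i   # broadcast share inconsistent with the recomputation
    return None        # the row was actually fine

print("identified cheater:", identify_cheater(coms, shares, seeds, nonces, row_id))
# -> identified cheater: 2
```

The point of the sketch is only the bookkeeping: what gets committed in the preprocessing, what gets broadcast during garbling, and how opening the commitments for one bad row lets everyone recompute that row and point at the party whose contribution is wrong.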
For this to work we obviously need commitments from the preprocessing phase to the individual shares, so we modify the HSS preprocessing as follows. First of all, similar to the IOZ compiler, we let every party commit to its random tape. Next, we run the preprocessing protocol of HSS, and we generate commitments to the individual keys using a broadcast channel. Then we check the consistency of the values that were obtained and of the committed values, and in case of an inconsistency we can open the commitments to the random tapes made by every party and, similar to the protocol of Ishai et al., simply compare with all the messages that were sent. Again, since this is the preprocessing step and no actual inputs of any party are used at this point, this does not break security.

Now this approach may be intuitively secure, but proving it secure with a simulation-based proof is actually not trivial. The problem is that the simulator itself obviously does not know the shares of the honest parties; these shares are hidden by the ideal functionality. But once we open all the messages that were sent, in case of an abort, these shares are effectively revealed, and the transcript generated by the simulator becomes inconsistent with them. So in case of an abort there would be an inconsistency between the simulation and an actual protocol run, and this is a problem. Ishai et al. solve this by using an adaptively secure protocol to generate the preprocessing; this is where the adaptively secure OT comes from. In our case we circumvent this problem by simply defining the problem away. Our solution is to define the preprocessing functionality as follows. Either the protocol was executed correctly, in which case the adversary may not abort anymore (in practice it cannot abort anymore) and the honest parties obtain their shares from the ideal functionality; or there is an abort, in which case the shares are never sent to the honest parties, which means there is no way of comparing them with the values that were sampled by the ideal functionality, and there is no need to use an adaptively secure protocol. An alternative to this was recently proposed in another work by myself, David and Dowsley, where we define so-called uber simulation, which allows obtaining this verifiability in a different way.
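As a toy illustration of the "commit to your random tape, then open and replay on a dispute" step used both in the IOZ compiler and in our modified preprocessing above, here is a hedged Python sketch. A tiny deterministic protocol over broadcast is modelled by the placeholder function next_message; on a complaint, the tapes are opened and the transcript is replayed in public to find the first party whose actual broadcast deviates from what its tape prescribes. The hash commitments and all names (next_message, audit) are invented for this sketch and are not the real HSS preprocessing.

```python
import os, hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def next_message(tape: bytes, transcript, rnd: int) -> bytes:
    """What an honest party must broadcast in round `rnd`, as a deterministic
    function of its committed random tape and the transcript so far."""
    return H(tape, bytes([rnd]), *transcript)

n, rounds = 3, 4

# Before the preprocessing starts, every party commits to its random tape.
tapes  = [os.urandom(32) for _ in range(n)]
nonces = [os.urandom(16) for _ in range(n)]
coms   = [H(tapes[i], nonces[i]) for i in range(n)]

# Run the (input-independent) preprocessing over broadcast; party 1 deviates.
transcript, sent = [], []
for rnd in range(rounds):
    for i in range(n):
        msg = next_message(tapes[i], transcript, rnd)
        if i == 1 and rnd == 2:
            msg = os.urandom(32)        # the cheating message
        sent.append((rnd, i, msg))
        transcript.append(msg)

# Someone complains: open the tapes and replay the transcript in public.
def audit(coms, tapes, nonces, sent):
    for i in range(n):
        if H(tapes[i], nonces[i]) != coms[i]:
            return i                    # opening does not match the commitment
    replay = []
    for rnd, i, msg in sent:
        if msg != next_message(tapes[i], replay, rnd):
            return i                    # first party to deviate from its tape
        replay.append(msg)
    return None                         # the complaint was unjustified

print("cheater identified by replay:", audit(coms, tapes, nonces, sent))
# -> cheater identified by replay: 1
```

Since nothing in the preprocessing depends on the parties' actual inputs, opening the tapes in the dispute case reveals nothing about the online phase, which is exactly why this replay step is safe.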
To conclude: in this talk I introduced our recent construction of efficient constant-round MPC with identifiable abort and public verifiability. The challenges we faced when constructing this protocol were, first of all, to handle identifiable abort and public verifiability efficiently, meaning to avoid expensive techniques such as zero-knowledge proofs as much as possible. We also eliminated the need for adaptively secure oblivious transfer, or an adaptively secure preprocessing, in order to obtain verifiability of the preprocessing phase, which we see as an additional contribution of our work with respect to the work of Ishai et al. Our overheads are as follows. During the online phase we make more use of broadcast than the protocol of Hazay et al. does, or use a bulletin board in order to achieve public verifiability, but we can reduce this overhead by using an optimistic variant of the protocol. During the offline phase we may also have to use more broadcast, which again can be reduced using an optimistic variant of the protocol, and we additionally need to use commitments, in our case homomorphic commitments, as mentioned in this talk. With this I would like to thank you for your attention, and if you have any questions feel free to contact me or any of the other authors of this work. Thank you.