OK, so I'm starting. Thank you for the introduction. Let's start with an example: voting. You have n parties, P1, ..., Pn, who want to vote electronically. Each party Pi has a vote xi; think of xi as a bit, 0 for no, 1 for yes. And each party wants to send a single message to an electronic ballot box, or to a server called an evaluator, which should then be able to compute the result of the vote, namely the majority of all the votes. I won't elaborate on this a lot, but in that setting you need a trusted setup. So suppose we have a trusted party which will generate correlated randomness for all the parties. More precisely, the trusted party will give to each party Pi two messages, one called m_i,0, one m_i,1, corresponding to the two possible input bits of the party. And then the protocol will just consist, for each party Pi, of sending the message corresponding to its input xi, so the message m_i,xi. From all these messages, the evaluator should be able to compute the majority of the bits. This kind of protocol is modeled by a private simultaneous messages protocol, or PSM for short, which was introduced by Feige, Kilian, and Naor, and by Ishai and Kushilevitz. Obviously, in that general model, instead of considering voting as a function, you consider a general function f. And you suppose that the evaluator is honest-but-curious. So the security you want to achieve is that the evaluator learns nothing except what it is supposed to output, namely the evaluation of the function on x1, ..., xn. This is an interesting model already, but it's kind of weak, because it does not provide any guarantee if one of the parties Pi is colluding with the evaluator and, for example, giving it its correlated randomness, or both its messages m_i,0 and m_i,1. So let's suppose, for example, that P1 colludes with the evaluator.
In that case, clearly, P1 and the evaluator can evaluate the function on both f(0, x2, ..., xn) and f(1, x2, ..., xn). This is due to the fact that the protocol is completely non-interactive. But what Beimel et al. introduced in 2014 is the notion of robust non-interactive multi-party computation (NIMPC), which says that in such a setting, the colluding parties and the evaluator should not learn more than these values that they can learn just by naively applying the protocol. And by the way, this set of values is called the residual function for the colluding parties. So more formally, the security property we want is t-robustness: the protocol is secure even if at most t parties are colluding with the evaluator, and the t parties and the evaluator should not learn more than the residual function. If you look at the notion of 0-robustness, it just corresponds to the original notion I talked to you about, PSM, because in 0-robustness you suppose that there is no collusion at all, just an honest-but-curious evaluator. To understand better what's going on, let's start by showing how to construct a simple PSM protocol using garbled circuits, and then we'll see that unfortunately this protocol, like most PSM protocols, is actually not secure against even a single collusion. So let's look at this construction. You have a circuit, and the first thing the trusted party will do is associate to each wire of the circuit two values called labels, and it will give to each party Pi the two values corresponding to its input wire: one value corresponds to the bit 0 on the wire and the other to the bit 1. The protocol works as follows: each party sends to the evaluator the label corresponding to its input. So here P1 sends m_1,1.
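As a sanity check on the definition, the residual function for a colluding set is just the table of f over all settings of the colluders' inputs, with the honest inputs fixed. A tiny sketch (the function names are mine):

```python
from itertools import product

def residual_function(f, n, colluders, honest_inputs):
    """Table of f over every assignment to the colluding parties' inputs,
    with the honest parties' inputs fixed: exactly what the colluders and
    the evaluator can compute by re-running the protocol in their heads."""
    table = {}
    for bits in product([0, 1], repeat=len(colluders)):
        x = dict(honest_inputs)            # {party index: bit}, honest parties
        x.update(zip(colluders, bits))
        table[bits] = f([x[i] for i in range(n)])
    return table
```

For majority on three bits with P1 colluding, the table has the two entries f(0, x2, x3) and f(1, x2, x3), indexed by P1's possible input.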
Then the evaluator will take all these values and evaluate the circuit in an encrypted way, using some additional information generated by the trusted party, which consists of ciphertexts usually called garbled gates. Concretely, for this blue gate here, the evaluator has access to these four ciphertexts, and using the two labels he got from parties P1 and P2, he will be able to decrypt exactly one row of the table and recover the label corresponding to the output of the gate. So here it's the label for output bit 1, because 1 AND 1 = 1 (this is an AND gate, in case you can't tell). The evaluator can then do the same here, and the same for this XOR gate. At the end he gets a label of the output wire, and he has some additional information which associates to each label of the output wire the actual output of the circuit. So this way, the evaluator learns nothing but the output of the circuit. But this only holds assuming there is no collusion; it's only a PSM. As soon as one party is allowed to collude with the evaluator, nothing works anymore. Let's look at this example again, and let's suppose that P2 is colluding with the evaluator. In that case, normally the evaluator together with P2 should not be able to learn more than x1 XOR x3: when x2 = 0, the output is always 0, and when x2 = 1, the output is x1 XOR x3. That's the only thing the evaluator should learn. Unfortunately, he can learn much more: what he can do is use the input label for x2 = 1 for the left AND gate and the input label for x2 = 0 for the right AND gate. If he then evaluates the circuit honestly as before, what he ends up with is x1, which is something he should not learn at all. So let's step back a little bit and look at what is known about these PSM and NIMPC protocols.
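For concreteness, here is a minimal sketch of a single garbled AND gate. This is my own simplification: SHA-256 of the two input labels plays the role of the encryption key, and the evaluator recognizes the right row by checking candidates against the set of output labels (real constructions use padding or point-and-permute instead).

```python
import hashlib, secrets

def H(ka, kb):
    # "encryption" key derived from the two input labels (simplification)
    return hashlib.sha256(ka + kb).digest()[:16]

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def garble_and_gate():
    """Pick two random 16-byte labels per wire (a, b inputs; c output) and
    encrypt, for each input-bit pair (p, q), the output label for p AND q."""
    L = {w: (secrets.token_bytes(16), secrets.token_bytes(16)) for w in "abc"}
    cts = [xor(H(L["a"][p], L["b"][q]), L["c"][p & q])
           for p in (0, 1) for q in (0, 1)]
    secrets.SystemRandom().shuffle(cts)    # hide which row is which
    return L, cts

def eval_and_gate(cts, la, lb, out_labels):
    """With one label per input wire, exactly one row decrypts to a valid
    output label (up to a negligible collision probability)."""
    for ct in cts:
        cand = xor(ct, H(la, lb))
        if cand in out_labels:
            return cand
    raise ValueError("no row decrypted")
```

Note that nothing in `eval_and_gate` stops an evaluator holding both labels of wire b from decrypting different rows in different gates; that is exactly the mix-and-match attack just described.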
In the PSM setting, as we have just seen, if you are willing to assume one-way functions, you can do this garbled-circuit trick and get a construction for any polynomial-size circuit. If you don't want any assumption, if you want to be information-theoretic, we know how to do PSM for branching programs, so basically log-space, and for some specific languages like quadratic residuosity; otherwise it's a completely open question, and we won't be able to do anything about that in this paper. On the other hand, for NIMPC, as you may guess, you have way fewer constructions. First, if you want to allow a collusion of polynomial size, the only construction we know is one for iterated group products, which is a kind of weird language, maybe. And worse, a really strong impossibility result is known: if you can do an NIMPC robust against polynomial-size collusions for any circuit, or for the universal circuit in particular, then you get iO. And if you can just do it for CNFs, which is a very, very weak class of functions, you already get witness encryption. So there is not much hope of constructing NIMPC secure against polynomial-size collusions under weak assumptions such as one-way functions or public-key encryption. Worse still, even for NIMPC robust against constant-size collusions, we don't know much: we have this iterated group product construction, and the only other construction we know is for abelian programs, which are basically a generalization of symmetric functions. So there is a huge gap between constant-robust NIMPC, which can do basically symmetric functions, and PSM, which can do log-space information-theoretically and any polynomial-size circuit computationally. In this paper, we bridge the gap between these two things.
We show that you can take any PSM, that is, any NIMPC secure against no collusion, and transform it into an NIMPC which is secure against constant-size collusions for the same function. The transformation is information-theoretic, so we don't need any assumption: if you start from a PSM which is information-theoretic, you end up with an NIMPC which is information-theoretic. In particular, just by combining this transformation with the previous constructions, we get information-theoretic constant-robust NIMPC for log-space, and computationally secure constant-robust NIMPC for P. We also have a side result about symmetric functions, but we won't have time to talk about that. Okay, so let's now look at this transformation. Recall, you want to take a PSM for some function and make it secure against a constant-size collusion. In this talk, I will really try to simplify the construction as much as possible: I will only consider PSMs constructed from garbled circuits, the kind of PSM you've seen at the beginning, and I will only consider collusions of size one, so we just want to make it secure against one player colluding with the evaluator. As we have seen, even in that case the protocol is not secure, and the issue is that if the evaluator learns two labels for the same input wire, then it can use these two labels to learn too much information. So what we want is to prevent the evaluator from learning two labels for the same garbled circuit and the same input wire. Here is the original PSM, and here is our idea: instead of using only one copy of the garbled circuit, we will use two copies, one indexed sigma = 0, the other indexed sigma = 1.
What we want is that if P1, for example, is colluding with the evaluator, then when P1 and the evaluator try to evaluate things using x1 = 0, the only thing they can do is use one of the two garbled circuits, but not both. And when they try to evaluate with x1 = 1, the only thing they will be able to evaluate is the other garbled circuit. That way they will never know both input labels for the same input wire of the same garbled circuit: it will be a different garbled circuit for each value of x1. But the issue is: how can you select which garbled circuit they will be able to evaluate? If you know that P1 is colluding with the evaluator, that's easy: you say that the only circuit they are able to evaluate is, for example, the circuit of index x1, and that satisfies the property. But if you don't know in advance who is colluding with the evaluator, you need to dynamically select the garbled circuit. In the case of one collusion, there is a nice way of doing it, which is just to take the parity of all the inputs: the circuit that the evaluator and the colluding party will be able to evaluate is the circuit indexed by the parity of all the inputs. But here's a small issue: how does the evaluator learn the input labels for the correct circuit? Note that, obviously, you cannot give all the input labels for both circuits to all the parties, otherwise we are back to where we started, or even worse: you just double the size of everything without getting any more security. So what we do is combine these two garbled circuits with n new 1-robust NIMPCs, which will be used to output the input labels that the evaluator, and possibly the colluding party, are allowed to see.
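The parity trick can be checked mechanically: flipping any single party's bit flips sigma, so a lone colluder's two possible inputs always point at the two different copies. A tiny sketch (the names are mine):

```python
def sigma(xs):
    """Index of the garbled-circuit copy that may be opened: parity of all inputs."""
    s = 0
    for x in xs:
        s ^= x
    return s

def copies_seen_by(colluder, honest, n):
    """Which copies a single colluder can reach by trying both of its inputs,
    with the honest parties' bits fixed."""
    seen = set()
    for b in (0, 1):
        xs = dict(honest)
        xs[colluder] = b
        seen.add(sigma([xs[i] for i in range(n)]))
    return seen
```

`copies_seen_by` always returns {0, 1}: the colluder's two inputs land on different copies, so it never gets two labels for the same wire of the same copy.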
More precisely, these NIMPCs are called selectors, and the trusted party will generate the correlated randomness for all of them and will give to P1, for example, the correlated randomness corresponding to the input 1 in all the selectors. Then, when running the protocol, P1 will just send, for all the selectors, the message corresponding to its input bit. The evaluator will evaluate the selectors, get the input labels for all the inputs of one of the two garbled circuits, then evaluate the corresponding garbled circuit and get the final output. The subtle point is that if P1 colludes with the evaluator and tries to cheat the selectors, for example by using x1 = 0 for the first selector and x1 = 1 for another selector, it won't be useful at all, because in that case they will learn one input label for one garbled circuit and another input label for the other garbled circuit. So it's completely useless for them. The only thing that P1 and the evaluator can do is evaluate everything honestly, either using x1 = 0, in which case they get the whole set of input labels for garbled circuit 0, or using x1 = 1, in which case they get the whole set of input labels for the other circuit. Basically, they will only learn f(0, x2, ..., xn) and f(1, x2, ..., xn), which is the residual function: you cannot prevent them from learning it, but it's the only thing they should learn. So that's perfect. Okay, so it remains to construct these selectors. I said that we need 1-robust NIMPCs for these selectors, so you may think we got nowhere, because we already wanted to construct a 1-robust NIMPC at the beginning. But the point is that the selectors are really simple functions: they only output a message selected by some linear relation, and that's something which is really, really simple.
And actually we show in our paper how to construct them information-theoretically using span programs and linear algebra. In this talk I won't have time to go over this construction; it's a bit technical. Instead I will show you a simplified Yao-based construction which requires one-way functions, but keep in mind that we don't need one-way functions at all in our real transformation. So here is the selector. How do we construct a selector for this 1-robust case, let's say the selector for the first input label? We will first construct this circuit. This circuit outputs two things: the pair (x1, sigma), where sigma is the parity of all the inputs. It uses this weird gate, but don't worry. Then what the trusted party will do is the following: it will garble this circuit, but instead of classically giving the association of each output label with the output of the circuit, it won't do that at all. Instead, it will publish an encryption of the input label that the evaluator should learn, keyed by the output labels. So for example, if x1 = 0 and sigma = 0, the evaluator will learn this output label, and from it the input label corresponding to x1 = 0 for the circuit sigma = 0, and so on. This way the evaluator learns exactly the requested label. Now, if you've been following the talk, you may think I'm cheating completely here, because I'm using a garbled circuit, and I've just said that garbled circuits are not secure against collusions. But the point is that this garbled circuit is really, really simple: first, each input is only used once, and second, all the gates are linear. And in that specific case, you actually are secure. Okay, so we have seen the construction for 1-robustness, security against collusion with a single party. Let's see how to extend it to constant-size robustness. Here you will need more than two garbled circuits.
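Jumping slightly ahead to that extension: the copy to evaluate will be selected by the syndrome of a code with minimum distance at least t + 1. Here is a sketch with the [7,4] Hamming code (distance 3, so it handles t = 2); the choice of code is mine, for illustration only:

```python
from itertools import combinations, product

# Parity-check matrix of the [7,4] Hamming code: minimum distance 3 >= t + 1
# for t = 2, so the syndrome separates any collusion of up to two parties.
Hm = [[1, 0, 1, 0, 1, 0, 1],
      [0, 1, 1, 0, 0, 1, 1],
      [0, 0, 0, 1, 1, 1, 1]]

def syndrome(x):
    """Index of the garbled-circuit copy to open (here 2^3 = 8 copies)."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in Hm)

def separates(t, n=7):
    """True iff, for every set of t colluders, distinct settings of their
    bits always select distinct copies (honest bits fixed to 0; by linearity
    of the syndrome this is without loss of generality)."""
    for T in combinations(range(n), t):
        seen = set()
        for bits in product([0, 1], repeat=t):
            x = [0] * n
            for i, b in zip(T, bits):
                x[i] = b
            s = syndrome(x)
            if s in seen:
                return False
            seen.add(s)
    return True
```

`separates(1)` and `separates(2)` hold, while `separates(3)` fails because the code has weight-3 codewords, matching the minimum-distance-at-least-t+1 condition.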
You will need more than that, and you will need a smarter way to select which garbled circuit will be evaluated. Previously you just took the parity; now you will use a code, actually the syndrome of a code. If the code has some reasonable minimum distance, namely at least t + 1, then you're good. And with a suitable choice of code, we can show that the resulting t-robust NIMPC protocol has complexity basically (2n)^t times the complexity of the original PSM. That's where the constant comes into play: since this is exponential in t, you need t to be constant, otherwise you get exponential size. And we have bad news: we cannot extend our technique to do better than constant-size collusions. We have a lower bound which is due to sphere packing, and it's hard to avoid. Okay, so to conclude: our main result is a transformation from any PSM, a weak model for which we know a lot of constructions, into a constant-t-robust NIMPC, for which we knew basically nothing except symmetric functions and iterated group products. We also have a side result about symmetric functions, but still, we don't manage to go beyond constant-size collusions; this is really just constant-size collusions. Okay, thank you for your attention.