Hi, I'm Fabrice Benhamouda, and I will be talking about mrNISC: multi-party reusable non-interactive secure computation. This is joint work with Rachel Lin. Let's start with the model. Suppose that you have four hospitals, H1, H2, H3, H4, with confidential databases of medical records, X1, X2, X3, and X4. And suppose that you have a research institute which wants to conduct a public study F on the records of H1, H2, and H3. By a public study F, I just mean a function F on the inputs X1, X2, and X3. Unfortunately, X1, X2, and X3 are confidential, so the hospitals cannot just reveal them. Instead, what we would like is for the hospitals to compute and publish some messages, or computation encodings, alpha 1, alpha 2, alpha 3, such that from these computation encodings, anyone can compute the output of the function F on X1, X2, and X3. Importantly, these computation encodings should not leak anything more than F(X1, X2, X3). In particular, they should not allow anyone to learn any single medical record. We are in the real world, so there are many different research institutes conducting many different studies with many different subsets of hospitals, and we want to allow another study, F prime, to also be conducted, by the same process. For example, this orange study is on the databases of H1 and H4. To conduct it, H1 and H4 can generate the computation encodings alpha prime 1 and alpha prime 4, and these computation encodings reveal F prime of X1 and X4. Remark that the computation encodings depend on the actual study: the green alpha 1 is different from the orange alpha prime 1. In addition, we would like to allow a hospital to join the system at any point. For example, the hospital H5 can join, and then another research institute can conduct yet another study, for example between H5 and H4, again by the same process.
But let's first focus on a single study, the green study with the hospitals H1, H2, H3, and ask ourselves: can we even achieve this ideal goal? Unfortunately, in '94 Feige, Kilian, and Naor, and in '97 Ishai and Kushilevitz, showed that we cannot: in any such non-interactive protocol, the residual function is leaked. What do we mean by that? Suppose that hospital H1 is corrupt. It can participate in the study normally, but then, in its head, it can consider another input, another database X prime 1, derive the corresponding computation encoding alpha prime 1, and combine it with the public computation encodings alpha 2 and alpha 3 of hospitals H2 and H3 to derive the output of the function F on X prime 1, X2, and X3. In other words, a corrupt hospital can compute F(X prime 1, X2, X3) for any X prime 1 of its choice. This is completely insecure. Intuitively, the issue is that the inputs of the hospitals, or parties, are not committed. So to bypass this lower bound, something must give, and what we propose in our paper is to allow hospitals to make commitments. Concretely, hospital Hi will create a commitment X hat i and keep the associated secret state si (its input and randomness). Then it publishes its commitment. The big difference with before is that the computation encoding alpha i can now depend on these commitments. For example here, alpha 2 is a function of the study F, the secret state s2 of hospital H2, and, most importantly, of the three commitments X hat 1, X hat 2, and X hat 3. So now, if H1 is corrupt and tries to use another database X prime 1 in its head, it will not be able to compute the output of the function F on X prime 1, X2, X3 using alpha 2 and alpha 3, because alpha 2 is linked to the commitment X hat 1, which commits to X1 and not X prime 1.
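The residual-function attack can be made concrete with a toy mock of a non-interactive protocol. Everything here is hypothetical and deliberately trivial (the "encoding" is just the input in the clear, so it hides nothing); the point is only to show the attack interface: once alpha 2 and alpha 3 are public, a corrupt H1 can re-evaluate F on any input of its choice.

```python
# Toy mock of a non-interactive protocol (hypothetical, totally insecure:
# the "encoding" is the input itself). It only illustrates the
# residual-function attack interface, not any real construction.

def encode(f, x):
    # A real protocol would hide x; this is a placeholder.
    return x

def evaluate(f, encodings):
    return f(*encodings)

# The public study: say, the total number of records.
f = lambda x1, x2, x3: x1 + x2 + x3

x1, x2, x3 = 10, 20, 30
alpha1, alpha2, alpha3 = encode(f, x1), encode(f, x2), encode(f, x3)

# Honest evaluation by anyone.
assert evaluate(f, (alpha1, alpha2, alpha3)) == 60

# Residual-function attack: a corrupt H1 re-encodes any x1' in its head
# and reuses the public alpha2, alpha3 to learn f(x1', x2, x3).
residual = lambda x1p: evaluate(f, (encode(f, x1p), alpha2, alpha3))
assert residual(0) == 50   # directly leaks x2 + x3
```

Because the attacker can query `residual` on arbitrary inputs, it learns far more than the single intended output, which is exactly the leakage the lower bound says is unavoidable without commitments.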
So alpha 2 and alpha 3 cannot be used with X prime 1, and hence the previous attack is avoided. But we still want to keep the flexibility we had before, namely that any hospital can join at any time by just making a commitment, keeping the secret state, and publishing the commitment. A new study can then be conducted between, for example, hospital H1 and this newcomer H4. Importantly, what we want is that the computation encodings alpha prime 1 and alpha prime 4 depend only on the commitments of the parties involved in the study, here X hat 1 and X hat 4. They should not depend on the commitments X hat 2 and X hat 3 of the other parties, hospitals H2 and H3. Let's try to be a little more formal and define precisely what an mrNISC is. An mrNISC scheme is defined by three algorithms. The first one is input encoding, or commitment, Com: it takes as input the input xi of a party (the database of medical records in our example) and outputs the commitment X hat i and the secret state si. Then, once the commitments of some parties are known, anybody can choose a subset S of these parties and ask them to compute a function F on their inputs. To do that, each party computes its computation encoding alpha i using the algorithm Encode, which takes as input the function to be computed, the commitments of all the parties involved in the computation (but of no other parties), and the secret state of the party we consider. From all the public computation encodings alpha i, anyone can then evaluate the output of the function using the third algorithm, Eval. This evaluation is completely public; no secret is needed to perform it. Importantly, you should be able to perform this computation on any set of parties of your choice and any function of your choice: you can compute the same function on many different sets of parties, or many different functions on the same set of parties; everything is allowed.
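The Com/Encode/Eval syntax can be sketched with a toy instantiation. This is hypothetical and NOT hiding (the encoding reveals the opening of the commitment); it only illustrates the three-algorithm interface and how binding commitments rule out the earlier input-swapping attack: Eval rejects any encoding that is not consistent with the published commitment.

```python
import hashlib
import os

# Toy mrNISC syntax sketch (hypothetical and NOT hiding: the "encoding"
# reveals the opening). It illustrates only the Com/Encode/Eval
# interface and the binding to commitments, not the real construction.

def com(x):
    r = os.urandom(16)
    x_hat = hashlib.sha256(repr(x).encode() + r).hexdigest()
    s = (x, r)                       # secret state: input and randomness
    return x_hat, s

def encode(f, commitments, s):
    # Round-2 message of one party; it depends only on the commitments
    # of the parties in this computation. Here it is just the opening.
    x, r = s
    return (x, r)

def evaluate(f, commitments, encodings):
    xs = []
    for x_hat, (x, r) in zip(commitments, encodings):
        # Check the encoding is bound to the published commitment.
        if hashlib.sha256(repr(x).encode() + r).hexdigest() != x_hat:
            raise ValueError("encoding not bound to commitment")
        xs.append(x)
    return f(*xs)

f = lambda a, b: a + b
x_hat1, s1 = com(10)
x_hat4, s4 = com(40)
alphas = [encode(f, [x_hat1, x_hat4], s1),
          encode(f, [x_hat1, x_hat4], s4)]
assert evaluate(f, [x_hat1, x_hat4], alphas) == 50
```

If a corrupt party tries to substitute a different input for a published commitment, the consistency check in `evaluate` fails, which is the (idealized) analogue of alpha 2 being cryptographically linked to X hat 1.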
And we want to achieve correctness even with dynamic party joining. Despite all of this, we still want a strong notion of security, namely simulation security. In detail, we consider semi-malicious adversaries (we can actually handle slightly stronger adversaries), static corruption, and dishonest majority, which means that even if all but one party is corrupt, we still ensure security for the non-corrupt party. Another way to look at mrNISC is as a two-round MPC with two extra properties. Look at it this way: in round one, the parties just broadcast their commitments X hat i, and in round two, on input some function F, the parties broadcast the computation encodings alpha i; then anyone can compute the output of F on their inputs from all these computation encodings. Importantly, we have two extra properties. The first one is a reusable first round: it should be possible to compute many different functions on the inputs by redoing only the second round, the first round being completely fixed. In addition, we want the set of parties involved in each computation to be dynamic. For example, you may want to compute a function F on the inputs of parties 1, 2, and 5, and this computation should only require parties 1, 2, and 5 to broadcast their computation encodings; the other parties should not need to do anything. Then you may want to compute a function F prime on the inputs of parties 3 and 4, and then only parties 3 and 4 need to participate, that is, to send their round-two messages for this computation. This is the dynamic-set-of-parties property. This relation with two-round MPC suggests that maybe we can just take existing two-round MPC protocols and tweak them a little to get an mrNISC. So let's look at existing two-round MPC.
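The two extra properties can be sketched as follows, again with a hypothetical, non-hiding placeholder (the round-two message is simply the opening, in the clear): round one is published once, and afterwards any number of functions can be computed on any subsets of parties, with only the chosen subset sending round-two messages.

```python
import hashlib
import os

# Minimal sketch (hypothetical, no hiding) of the two extra properties:
# a reusable first round, and a dynamic set of parties per computation.

def com(x):
    r = os.urandom(16)
    return hashlib.sha256(repr(x).encode() + r).hexdigest(), (x, r)

def round2(subset, f, states):
    # Only the parties in `subset` send a round-two message. In a real
    # scheme this message depends on f and on the subset's commitments;
    # here it is just the opening, as a placeholder.
    return {i: states[i] for i in sorted(subset)}

def output(f, alphas):
    return f(*(x for x, _ in alphas.values()))

# Round 1, done once and for all, for all future computations:
inputs = {1: 10, 2: 20, 3: 30, 4: 40, 5: 50}
round1 = {i: com(x) for i, x in inputs.items()}
states = {i: s for i, (c, s) in round1.items()}

# Study F on parties {1, 2, 5}; parties 3 and 4 do nothing.
f = lambda a, b, c: a + b + c
assert output(f, round2({1, 2, 5}, f, states)) == 80

# Later, study F' on parties {3, 4}, reusing the very same round 1.
fp = lambda a, b: max(a, b)
assert output(fp, round2({3, 4}, fp, states)) == 40
```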
So there are essentially five main lines of work, and I apologize if I'm missing some citations, but here they are. The first one is based on multi-key FHE, by Asharov et al. in 2012. This line of work gives two-round MPC from LWE, which is a very standard assumption. The issue is that it requires a setup, and in our mrNISC we would like to avoid setup, to allow as much flexibility as possible. The second caveat is that we don't know how to make it reusable and to support a dynamic set of parties, because the notion of security that multi-key FHE achieves looks a little too weak. The second line of work is homomorphic secret sharing. Homomorphic secret sharing is an amazing primitive introduced by Boyle, Gilboa, and Ishai in 2016, with many applications; in particular, it allows constructing two-round MPC. Unfortunately, in that setting it also requires a setup. Furthermore, it seems very hard to achieve a dynamic set of parties with homomorphic secret sharing because, as the name indicates, some secret is shared between the parties, and if a new party joins the system, this secret somehow needs to be reshared with the new party; most likely we would need interaction for that. The good thing is that homomorphic-secret-sharing-based two-round MPC relies on a very standard assumption, DDH. The next line of work is obfuscation-based two-round MPC, introduced by Garg, Gentry, Halevi, and Raykova in 2014. The advantage of this construction is that it requires no setup, and with small tweaks it can be made reusable and can support a dynamic set of parties. The big drawback is that it uses iO, indistinguishability obfuscation, which is a very complex primitive. So in 2015, Gordon, Liu, and Shi showed that you can replace iO by witness encryption, which is a seemingly weaker primitive, while keeping the reusability and dynamic-set-of-parties properties.
Unfortunately, witness encryption is still a very complex primitive. So in 2017 and 2018, Garg and Srinivasan, and Rachel and I, showed that we can instantiate the GLS15 construction under much weaker assumptions: namely, bilinear maps (DLIN) for GS17, and two-round OT for GS18 and BL18, two-round OT being the minimal assumption. The drawback is that we lost the reusability and dynamic-set-of-parties properties. In our result, we show how to recover these two properties using just a slightly stronger assumption than two-round OT, SXDH. Let's compare our work with concurrent works, namely a work by Ananth, Jain, and Jin, and a work by Bartusek, Garg, Masny, and Mukherjee. Both achieve reusable two-round MPC without any setup and based on standard assumptions, LWE and DDH respectively. Interestingly, they build on the first two lines of work described before. The caveat is that they don't support a dynamic set of parties. So in this work, we achieve a construction with a dynamic set of parties at the expense of a slightly stronger assumption, SXDH (stronger at least compared to DDH). Okay, here are our contributions. First contribution: the definition of mrNISC. Second contribution: a construction of mrNISC from SXDH, and, as a side result, a construction of witness encryption for a new family of languages, NIZKs of commitments. Finally, two applications: secret-sharing VBB obfuscation (VBB obfuscation cannot be achieved, but a secret-shared version can), and a reusable setup for non-interactive MPC (non-interactive MPC, or NIMPC, is a close cousin of mrNISC, but without commitments, and so with a much weaker security property, namely leakage of the residual function). Let's now show how we achieve our construction, starting with the paper of Garg, Gentry, Halevi, and Raykova from 2014.
In this paper, they construct two-round MPC by starting with a many-round MPC, an L-round MPC, and compressing all the rounds using iO. Here is how the construction works. In the first round, parties commit to their inputs; that's exactly what happens in our mrNISC. In the second round, parties broadcast obfuscations of programs computing their next messages in the L-round MPC. Why does this work? The idea is that after receiving the obfuscated programs of all the parties, you can run the L-round MPC in your head, obtain all the messages of the L-round MPC, and thereby get its output, which is the output you want. There is a small tweak: the obfuscated programs need to verify that the messages they receive are valid with regard to the commitments, to avoid leakage of the residual function; for that, they use non-interactive zero-knowledge (NIZK) proofs, which the programs verify. iO is a very expensive primitive, as we said before. So in 2015, Gordon, Liu, and Shi showed that you can replace iO by garbled circuits plus a seemingly weaker primitive, witness encryption. The idea is the following. Instead of obfuscating this orange box, we garble a slightly simpler green box: it's actually the same as the orange box, except without the verification of the NIZK proofs. And if you know garbled circuits, you know that to evaluate a garbled circuit, what you need are the labels, or keys, corresponding to the input wires. How can parties recover these labels? What they do in GLS15 is witness-encrypt these labels, so that people can decrypt exactly the labels they need, using NIZK proofs proving that the messages they received before are the valid messages. More precisely, witness encryption allows you to encrypt, for example, labels with regard to some statement.
And if this statement is true and you know a witness for it, then you can decrypt the ciphertext and get the garbled-circuit label. In GLS15, they used generic witness encryption, which works for all statements; the issue is that this is a very expensive primitive. And actually, they don't need witness encryption for all statements, just for a specific kind of statement. Here it is. The statement is a triple of a commitment c, an output y, and a function g; the witness is a value x and a NIZK proof pi proving that c is a commitment of x and that g(x) = y. So if you had a witness encryption for this specific language, you would be done. Unfortunately, until recently, we did not know how to build one. Instead, Garg and Srinivasan in 2017 showed that you can construct witness encryption for a slightly more restricted class of languages, where the function g is a NAND function, and they showed that this is sufficient for two-round MPC. In 2018, Rachel and I showed that if we restrict the statements slightly differently, to the case where a commitment can be used a single time (it becomes insecure if used multiple times), then you can actually construct such a witness encryption from just two-round OT. Garg and Srinivasan in 2018 independently showed how to construct something in the intersection of both from two-round OT, and they also showed that this is sufficient for two-round MPC. So that's great for two-round MPC. But for mrNISC, we really need the green class of languages; we cannot just use the blue or the gray one, it's too small. So in this work, what we do is show how to construct witness encryption for this green language, this language of NIZKs of commitments. Let me show you how to do it at a very high level.
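The target language can be made concrete with an ideal-functionality mock of witness encryption. This is purely illustrative: a real witness encryption scheme cannot check the relation in the clear the way this mock does, and in the actual construction the witness involves a NIZK proof rather than the raw commitment opening; here the opening stands in for the proof.

```python
import hashlib
import os

# Ideal-functionality mock (hypothetical) of witness encryption for the
# commitment language: statement (c, g, y), witness (x, r) such that
# commit(x, r) == c and g(x) == y. The "ciphertext" carries the message
# in the clear and Dec checks the relation directly, which a real
# scheme of course cannot do; this only pins down the interface.

def commit(x, r):
    return hashlib.sha256(repr(x).encode() + r).hexdigest()

def relation(statement, witness):
    c, g, y = statement
    x, r = witness
    return commit(x, r) == c and g(x) == y

def we_enc(statement, message):
    return (statement, message)          # mock "ciphertext"

def we_dec(ciphertext, witness):
    statement, message = ciphertext
    return message if relation(statement, witness) else None

# Example: encrypt a garbled-circuit label under the statement
# "c opens to some x with g(x) = 9", where g is squaring.
g = lambda x: x * x
r = os.urandom(16)
c = commit(3, r)
label = os.urandom(16)
ct = we_enc((c, g, 9), label)

assert we_dec(ct, (3, r)) == label       # a valid witness decrypts
assert we_dec(ct, (4, r)) is None        # a wrong opening fails
```

In GLS15 and its successors, exactly this interface is what releases the right garbled-circuit labels to anyone holding valid (proof-carrying) protocol messages.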
So remember, the goal is to be able to encrypt with regard to a statement composed of a commitment c, an output y, and a function g, and to be able to decrypt if you know a witness, that is, a value x and a NIZK proof pi proving that c is a commitment of x and that g(x) = y. How do we do that? Suppose that you have a commitment that is fully homomorphic: from a commitment of x, you can compute a commitment c prime of g(x). Suppose also that this commitment has witness encryption as above, but only for g the identity function, so just for statements "y = x". Very simple. Actually, so simple that each of these two properties can be achieved from known primitives: for example, a fully homomorphic commitment can be constructed from FHE, and a commitment with witness encryption for the identity function can be constructed from two-round OT (actually, it's almost a two-round OT). The issue is that we don't know how to construct a commitment that satisfies both properties at the same time. So the idea is the following. Remark that a BGN commitment is already one-multiplication homomorphic, so you can do one multiplication, and it also has witness encryption for the identity function; that's essentially what Garg and Srinivasan used in the GS17 paper. But one multiplication is not sufficient: it's enough to compute NAND, as we have seen before, and hence to get two-round MPC, but it's not sufficient for mrNISC. So instead, what we showed is that we can somehow relinearize the commitment to get additional multiplications, and thereby obtain witness encryption for NIZKs of commitments for any function g (any polynomial-time function g, obviously). That's how we construct witness encryption for NIZKs of commitments, and that's how we construct mrNISC. To conclude: we defined a new model, mrNISC; we showed how to construct mrNISC from SXDH; and we presented two applications.
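The first ingredient, homomorphically mapping a commitment of x to a commitment of g(x) without knowing x, can be illustrated for linear g with a Pedersen-style additively homomorphic commitment. The parameters below are tiny toy values, completely insecure at this size, and this sketch stops at additions and scalar multiplications; BGN additionally allows one multiplication, and the paper's relinearization idea is what extends this to arbitrary polynomial-size g.

```python
# Toy additively homomorphic commitment (Pedersen-style, in a tiny
# group; illustrative parameters only, completely insecure at this
# size). commit(x, r) = g_^x * h^r mod p, with g_, h in the order-q
# subgroup of Z_p^*.
p = 1019                      # small prime; q = 509 divides p - 1
q = 509
g_, h = 4, 9                  # toy generators of the order-q subgroup

def commit(x, r):
    return (pow(g_, x % q, p) * pow(h, r % q, p)) % p

# Commitments to x1 and x2:
c1 = commit(3, 11)
c2 = commit(5, 7)

# Anyone can derive a commitment to x1 + x2 by multiplying:
c_sum = (c1 * c2) % p
assert c_sum == commit(3 + 5, 11 + 7)

# More generally, for linear g(x) = a*x + b, the value c1^a * g_^b is
# a commitment to g(x1), computed without knowing x1:
a, b = 6, 2
c_lin = (pow(c1, a, p) * pow(g_, b, p)) % p
assert c_lin == commit(a * 3 + b, a * 11)
```

The second ingredient, witness encryption for the identity statement "c opens to y", then only ever needs to be applied to the final, homomorphically derived commitment.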
So I did not show you the details of the applications; please read the paper if you're interested. And thank you very much.