Thanks for the intro, Manac, and thanks to all of you for holding out until now. I'm Nico, and this is joint work with Nils Fleischhacker, Johannes Krupp, and Dominique Schröder. The title of this talk is Two-Message Oblivious Evaluation of Cryptographic Functionalities, but what I really want to talk about is how to use security proofs in a non-black-box fashion. Let me give you a little intro to what I mean by that. Usually, when we want to prove the security of a more complex protocol, we first try to reduce the security of this protocol to the security of some primitive; think of this primitive as, for instance, digital signatures. This primitive, in turn, might come with a hardness reduction of its own that bases its security on some hardness assumption. If we compose those two reductions, we get a security reduction that bases the security of the protocol on the underlying assumption. That's how we usually do it. What I will show you today is a slightly different way of basing the security of a protocol on a hardness assumption when this standard route is not possible. I will show you a way to compile a primitive into a protocol, together with a corresponding way to compile the security proof of the primitive into a security proof for the protocol. Let me set the stage for that. We consider secure function evaluation with two parties, Alice and Bob, who want to jointly compute a function f using some protocol pi, with input x from Alice and input y from Bob. We say such a protocol is secure against malicious adversaries if, once one of the parties is corrupted, in this case Bob, there exists a simulator that simulates the view of this corrupted party while having only black-box access to an ideal functionality that computes the function f. What is usually important is that this simulator S runs efficiently. A well-studied aspect of secure function evaluation is its round complexity.
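To make the simulation-based definition just described concrete, here is one standard way to write it down; the notation is ours, not from the talk:

```latex
% Security against a corrupted Bob, written as an indistinguishability
% requirement (notation ours): for every efficient adversary B* there
% exists an efficient simulator S, with black-box access to the ideal
% functionality F_f on Alice's input x, such that for all inputs x, y:
\[
\bigl\{\, \mathrm{View}^{\pi}_{B^*}(x, y) \,\bigr\}_{x, y}
\;\approx_c\;
\bigl\{\, \mathcal{S}^{\mathcal{F}_f(x,\cdot)}(1^{\lambda}, y) \,\bigr\}_{x, y}
\]
```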
Since the work of Goldreich and Oren, we know that secure function evaluation is not possible in two rounds, that is, with two messages, if we require efficient simulation. What Goldreich and Oren showed is that zero-knowledge, which is an instance of secure function evaluation, is not possible in two rounds; this is an unconditional impossibility result. We also have further results showing that black-box techniques are insufficient to go below five rounds: Katz and Ostrovsky showed that if we only use black-box techniques, we need five rounds of interaction. On the other hand, we know that if we grant ourselves a setup, then we can do secure function evaluation in two rounds, or with two messages. A folklore result is to use fully homomorphic encryption and NIZKs; this was mentioned, for instance, in Craig Gentry's thesis. The motivation of this work is to look into what level of security we can achieve in two rounds without any setup. One possible route is, of course, to relax the simulation requirement: the two impossibility results just mentioned assume that the simulator is efficient, and they do not hold for unbounded simulation, so there might be hope. Let me now show you a rather general blueprint for secure function evaluation in two rounds, a very intuitive approach based on fully homomorphic encryption. Say Alice has an input x1. She generates public and secret keys for a fully homomorphic encryption scheme, encrypts her input x1 under this key, and sends both the public key and the ciphertext c1 to Bob. Bob can homomorphically evaluate a circuit C, in which he hardwires his own input, on Alice's ciphertext c1. Bob thus obtains an output ciphertext c2, which he sends back to Alice, who uses her secret key to decrypt. This is an intriguingly simple protocol, and a natural question is: what security does it offer?
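The two-message blueprint can be sketched in code. A full FHE scheme is far out of scope, so the sketch below uses a toy additively homomorphic Paillier-style scheme with tiny hardcoded primes (completely insecure, illustration only); accordingly, Bob can only evaluate affine circuits C(x) = a*x + b rather than arbitrary ones, but the message flow is exactly the one from the talk:

```python
import math
import random

def keygen(p=11, q=13):
    # Toy Paillier keypair with tiny hardcoded primes (insecure, demo only).
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # modular inverse of lambda mod n
    return (n,), (n, lam, mu)     # (pk, sk)

def enc(pk, m):
    (n,) = pk
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    # Enc(m) = (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def dec(sk, c):
    n, lam, mu = sk
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

# Message 1 (Alice -> Bob): the public key and an encryption of her input x1.
pk, sk = keygen()
x1 = 7
c1 = enc(pk, x1)

# Bob homomorphically evaluates the affine circuit C(x) = a*x + b,
# with his own inputs a, b hardwired, directly on Alice's ciphertext.
a, b = 5, 3
n = pk[0]
c2 = (pow(c1, a, n * n) * enc(pk, b)) % (n * n)   # encrypts a*x1 + b

# Message 2 (Bob -> Alice): c2. Alice decrypts to learn the output.
print(dec(sk, c2))  # 5*7 + 3 = 38
```

Alice never sees a, b in the clear, and Bob never sees x1; with a genuinely circuit-private scheme, c2 would also hide the circuit Bob evaluated.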
Now clearly, if we use a fully homomorphic encryption scheme, it should provide IND-CPA security, so the privacy of the receiver's input is guaranteed by IND-CPA security. For sender security, or sender privacy, we need a slightly different property called circuit privacy, which guarantees that the ciphertext c2 that the receiver obtains encodes no information about the circuit C, and thus about the input x2 that Bob used to compute c2. Let me elaborate a little more on circuit-private FHE. Usually we define circuit privacy via the existence of a simulator. We say a scheme has circuit privacy if a ciphertext c that has been computed by homomorphically evaluating a circuit C on well-formed input ciphertexts is indistinguishable from a ciphertext computed by a simulator that only gets the output of the circuit C on the inputs x1, x2, x3, and so forth. This should remain indistinguishable even if the distinguisher gets the output ciphertext c, the input ciphertexts, and both the public and secret keys. Since we're looking into malicious security in this talk, semi-honest circuit privacy is not enough; we therefore need malicious circuit privacy. Malicious circuit privacy requires an extra algorithm called the extractor. In malicious circuit privacy, the distinguisher may actually choose both the public key and the input ciphertexts that go into the circuit, so those ciphertexts might not even correspond to well-defined messages; we cannot just say they correspond to some messages x. Instead, we define an extractor Ext that, given those ciphertexts, extracts plaintext messages; we pass these on to the circuit C, compute C on them, and hand the output to the simulator, and we require that those two distributions are indistinguishable under adversarial choice of the public key and the input ciphertexts c1, c2, c3.
Of course, if the public key is maliciously chosen, there might not be a well-defined secret key, but we still require that those two distributions over c are indistinguishable under this malicious choice of the public key pk, the input ciphertexts, and possibly some auxiliary input. One thing we immediately notice is that this extractor cannot be efficient: there are no setup assumptions involved here, so if the extractor were efficient, we could directly use it to break the IND-CPA security of the homomorphic encryption scheme. Let me remark one more thing. Ostrovsky, Paskin-Cherniavsky, and Paskin-Cherniavsky, who introduced this notion, showed that it can actually be obtained in a rather simple manner from semi-honest circuit-private homomorphic encryption. Basically, once you are finished with the computation of the semi-honest scheme, you put an information-theoretic garbled circuit on top that verifies that this computation has been done correctly and only then releases the output ciphertext. All right, so when we use such a maliciously circuit-private FHE scheme for secure function evaluation, the FHE directly gives us security against a semi-honest sender: as long as the sender behaves honestly and only sees ciphertexts, even if it later sees the output of the receiver, everything is fine and we can simulate. However, it is different with security against malicious receivers. The security experiment for circuit privacy involves this unbounded extractor, which means that if the circuit C we want to compute comes with a computational security guarantee of its own, we are in a bad place. So we can only directly use this protocol based on circuit-private FHE to compute functionalities that have an information-theoretic flavor, that is, that don't have any computational hardness notion attached to them.
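Summarizing the malicious circuit privacy requirement in a formula (our notation, for reference), with adversarially chosen public key and input ciphertexts and an unbounded extractor Ext:

```latex
% Malicious circuit privacy: for adversarially chosen pk* and input
% ciphertexts c_1*, ..., c_n*, with extracted plaintexts
% x_i* = Ext(pk*, c_i*) (Ext may be unbounded), we require
\[
\bigl(\, pk^*,\ \vec{c}^{\,*},\ \mathrm{Eval}(pk^*, C, \vec{c}^{\,*}) \,\bigr)
\;\approx\;
\bigl(\, pk^*,\ \vec{c}^{\,*},\ \mathrm{Sim}\bigl(pk^*, C(x_1^*, \ldots, x_n^*)\bigr) \,\bigr)
\]
```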
As I said, simulation in this case turns any adversary, even an efficient one, into an unbounded adversary, and this prohibits further composition. So the question is: what if we want to use this approach on a cryptographic functionality for which we cannot hope for anything better than computational security? This is where our work starts. We first provide a well-defined security notion for cryptographic primitives in this setting, capturing what happens if we take a cryptographic primitive and run this protocol on it. We call this new notion SFE with induced game-based security for cryptographic primitives, and it will hold against malicious receivers. In our paper we also show that having a malicious party on one side is the best we can hope for, since otherwise we could overcome some strong black-box impossibility results. One crucial point is that the underlying primitive we look at has to come with a certain kind of hardness reduction of its own. I will show you a novel way of composing security proofs in a non-black-box manner that allows us to compile the security reduction of a primitive into a security reduction for our output protocol. The main idea of this technique is to take a reduction for a primitive and evaluate some part of this reduction inside a homomorphic encryption scheme. We will see that in a moment, but first let me define this notion of induced game-based security.
Say we have some cryptographic primitive F; think again of digital signatures. It comes with a security experiment, say the EUF-CMA experiment, where the adversary is provided access to a signature oracle, can query messages, and receives signatures on them. Induced game-based security is defined via a similar experiment: we don't change anything on the experiment's side, but instead of giving the adversary access to the primitive directly, the primitive is evaluated underneath a homomorphic encryption scheme, or any other SFE protocol you can think of, but for simplicity let's stick with homomorphic encryption. So instead of having direct access to F, the adversary A chooses a public key, encrypts its queries, and sends the ciphertexts; the primitive F is evaluated homomorphically, and A gets back an output ciphertext. As I said, this homomorphic experiment is really identical to the original experiment; the only change is in A's access to the primitive. All right, let's start with a naive attempt to base the security of this homomorphic experiment on the underlying security of the primitive we use. We want to show that an adversary has only negligible advantage in winning this experiment. Since we're building on circuit-private homomorphic encryption, we can try to just plug in what circuit privacy provides us: we replace the homomorphic evaluation by first extracting A's input to the primitive F and then, once F has been computed on the extracted input, wrapping the output of F into a ciphertext via the simulator. What I can try to do now is regroup machines: I take both the extractor and the simulator and pull them into the adversary A, and now I actually have an adversary that plays against the original security experiment of the primitive. The only problem is that the extractor Ext is unbounded, so we end up with an unbounded adversary, and in this setting cryptographic primitives usually don't provide any security
guarantee. To get around this conundrum, we will look into a security reduction for our primitive, and for our technique to work, we need a certain kind of security reduction, which we call an oblivious black-box reduction. The reduction should be black-box, meaning it only makes black-box use of the adversary A, and it should be oblivious in the sense that the oracle for the primitive that A requires should not be able to peek into A's queries; the reduction basically cannot memorize anything it has seen about A's queries to the primitive. By the way, let me remark that this kind of reduction is actually quite common: there are many reductions basing the security of adaptive primitives on the security of their selective versions, so this is not something non-standard. So let's equip ourselves with such a reduction and try to finish the proof. The first two steps are quite similar to before: we use circuit privacy and then regroup our machines, and we still have this unbounded adversary A' here. But now I take this adversary A' and plug it into the oblivious reduction R. Why should this even work? The reason is that I required R to be a black-box reduction, and black-box reductions don't care how an adversary breaks a primitive; the adversary can use superpolynomial powers and do whatever it wants; the reduction only cares that the adversary has some advantage in breaking the primitive. So since the reduction R is black-box, it should still work with the unbounded adversary A'. What I do now is basically reverse the circuit privacy step: first I regroup both the extractor and the simulator into the oracle, thereby ending up with an inefficient reduction, but then I can use circuit privacy again to replace this with an efficient implementation of a homomorphically evaluated oracle. So even though this
oracle might do something completely different from the original primitive F, I have established that this reduction R with a homomorphically evaluated oracle can actually break some hard problem P. In the remaining time, let me quickly talk about two applications of this method. The first one is a rather generic construction of blind signature schemes: we can take any signature scheme with a non-adaptive security reduction, for instance one based on chameleon hashing, and the requirements we have for the homomorphic encryption are actually quite weak. We don't need compactness in this case, so we can start with a non-compact homomorphic encryption based on oblivious transfer and garbled circuits. The other application is two-message oblivious pseudorandom functions. Here the primitives we start with are pseudorandom functions with oblivious reductions, which can actually be obtained from the Naor-Reingold PRF, but in this case we do need compact FHE. So let me wrap up real quick: what I've shown you is that certain security reductions can be used in a non-trivial, non-black-box way for composition, and this allows us to circumvent certain barriers in two-message secure computation. Thanks for your attention.
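As an aside, the Naor-Reingold PRF mentioned above is simple enough to sketch directly. The version below uses toy parameters of our own choosing (a subgroup of order q = 11 in Z_23*, generated by g = 2), which are of course far too small to be secure; it only illustrates the structure f_key(x) = g^(a_0 * prod over set bits of a_i):

```python
import random

Q, P, G = 11, 23, 2  # toy parameters: G generates the order-Q subgroup of Z_P*

def nr_keygen(n):
    # Key: n+1 random exponents a_0, ..., a_n in Z_Q* (toy sizes, insecure).
    return [random.randrange(1, Q) for _ in range(n + 1)]

def nr_eval(key, x):
    # Naor-Reingold: f_key(x) = G^(a_0 * prod_{i : x_i = 1} a_i mod Q) mod P
    e = key[0]
    for i, bit in enumerate(x, start=1):
        if bit:
            e = (e * key[i]) % Q
    return pow(G, e, P)

key = nr_keygen(4)
print(nr_eval(key, [1, 0, 1, 1]))
```

The per-bit multiplicative key structure is exactly what makes oblivious evaluation of this PRF under homomorphic encryption so natural.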