My name is Andrew Morgan. In this talk, I'll be presenting our recent research, done jointly with Rafael Pass and Antigoni Polychroniadou, which constructs a protocol for succinct non-interactive secure computation. We'll focus on the setting of secure two-party computation originally proposed by Yao, where we have a receiver with input X and a sender with input Y who wish to jointly compute some functionality F on their respective inputs. To compute this function securely, we intuitively want the property that an adversary who controls one of the two participating parties shouldn't learn anything besides the output F(X, Y). This intuition has been formalized as simulation-based security, which requires that a computationally bounded adversary should be unable to distinguish between the real execution of the protocol and an idealized execution with a trusted third party and simulated messages. Additionally, while two-party computation was originally studied in the semi-honest setting, where adversaries could attempt to learn private information but otherwise were required to adhere to the protocol, we'll consider simulation-based security against fully malicious adversaries, who are able to deviate from the protocol arbitrarily.

In terms of round complexity, the best we can possibly hope to achieve is two-round, or non-interactive, secure computation, abbreviated NISC, since a one-round protocol would inherently be susceptible to residual attacks: the party receiving the single message would necessarily be able to compute the output of F on arbitrary inputs of its choice. For the same reason, it is inherently required that only the receiver, rather than both parties, obtains the final output. Even given minimal round complexity, we still want to minimize the communication complexity of a secure computation protocol.
In particular, consider a protocol where both the communication complexity and the receiver's running time depend only on the input and output lengths of the functionality F, rather than on the running time of the functionality itself. Such a protocol would be ideal for private outsourcing of computation, where, for instance, a computationally weak client (the receiver) wants to outsource a complex computation to a powerful server (the sender) while maintaining the privacy of its own inputs and of the server's private data. This property, which we'll refer to as succinctness, was in fact one of the original motivations given by Gentry when he introduced the now well-known and widely used primitive of fully homomorphic encryption. We'll discuss this in more detail later.

Tabling succinctness for a moment, we know of many non-interactive secure computation protocols which achieve malicious security in various models with trusted setup. In the plain model, however, life isn't as easy. In fact, it is well known that, under the standard definition of simulation-based security, four rounds are both necessary and sufficient for security against a fully malicious adversary in the plain model. So, to even think about developing a two-round protocol, we'll need to relax the definition of security somewhat. In particular, we'll consider a well-studied relaxation of simulation-based security where the simulator is allowed to run in quasi-polynomial time, though the adversary is still restricted to polynomial time. It turns out that, using this relaxed notion of superpolynomial-time-simulation (SPS) security, two-round protocols do exist: Badrinarayanan et al. in 2017 demonstrated that a two-round protocol with malicious SPS security in the plain model could be based on sub-exponential security of any one of a variety of standard assumptions. However, none of the protocols we've discussed so far satisfy the succinctness property we mentioned earlier.
As promised, let's return to this now. All the protocols I've mentioned, even those which require models with trusted setup, have communication complexity which is polynomial in the functionality's running time. This, of course, makes state-of-the-art protocols less than ideal in the outsourced computation scenario we discussed earlier. Instead, we would like the communication complexity and the receiver's running time to be polynomial in only the input and output size of F and polylogarithmic in, or essentially independent of, the functionality's running time. We'll refer to this property as succinctness, as mentioned before. This brings us to the main question our work will answer: can we construct a non-interactive secure computation protocol that is both succinct and maliciously secure? In fact, as we mentioned, this question is open not only in the plain model, but also in models with trusted setup, such as a common reference string.

To begin investigating this question, let us revisit Gentry's original suggestion towards succinct NISC using fully homomorphic encryption. To see how we can construct succinct secure computation using FHE, let's begin by having the receiver generate a key pair, encrypt their input X, and send the public key and ciphertext to the sender. The sender encrypts their input Y and homomorphically evaluates the function F on the two input ciphertexts to obtain a ciphertext for the output. Finally, they send the resulting ciphertext to the receiver, who can simply use the secret key to decrypt it and obtain the result. This protocol clearly satisfies our notion of succinctness, since only the sender performs the homomorphic evaluation and only ciphertexts and the public key are sent, but it's also fairly plain to see that it is only secure when the adversary acts semi-honestly, that is, without deviating from the protocol.
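The FHE-based message flow just described can be sketched as follows. To be clear, this is a toy mock with no security whatsoever: the "ciphertexts" simply carry their payloads, and all the function names (`receiver_round1` and so on) are illustrative rather than taken from any real library. The point is only to show the two-message structure and where succinctness comes from, namely that the heavy evaluation happens entirely on the sender's side.

```python
from dataclasses import dataclass
import secrets

@dataclass
class Ciphertext:
    pk: int
    payload: int  # in a real FHE scheme this would be an actual encryption

def keygen():
    sk = secrets.randbits(32)
    pk = sk  # mock only: a real FHE public key does not reveal sk
    return pk, sk

def encrypt(pk, m):
    return Ciphertext(pk, m)

def eval_hom(pk, f, ct_x, ct_y):
    # Homomorphic evaluation: computes f "under encryption".
    return Ciphertext(pk, f(ct_x.payload, ct_y.payload))

def decrypt(sk, ct):
    return ct.payload

def receiver_round1(x):
    # Round 1 (receiver -> sender): the public key and an encryption of x.
    pk, sk = keygen()
    return (pk, encrypt(pk, x)), sk

def sender_round2(msg1, y, f):
    # Round 2 (sender -> receiver): Enc(f(x, y)), computed homomorphically.
    pk, ct_x = msg1
    ct_y = encrypt(pk, y)
    return eval_hom(pk, f, ct_x, ct_y)

def receiver_output(sk, ct_out):
    return decrypt(sk, ct_out)
```

Note that the receiver's work is independent of f: it only encrypts, sends, and decrypts, which is exactly the succinctness property discussed above.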
A malicious adversary can, for instance, evaluate using a circuit of their choice rather than the circuit that correctly evaluates the functionality F. One potential fix for this would be to use a zk-SNARK, a succinct non-interactive argument of knowledge, to have the sender prove that they indeed performed the correct computation, but this would be somewhat unsatisfying, as zk-SNARKs aren't known in the plain model or, for that matter, from standard assumptions. Instead, let's investigate another way of obtaining such a soundness property, using protocols for what is known as delegation of computation. Such a protocol gives us a succinct, albeit not input-private, way of outsourcing computation in a way that can later be publicly verified.

To demonstrate how this works: the receiver sets up a key pair and sends their input and the public key to the sender; the sender then computes the result Z = F(X, Y) along with a proof of the computation's validity and sends their input, the output, and the proof to the receiver; the receiver verifies that the proof is a valid proof that F(X, Y) = Z, and outputs the result if it verifies, or rejects if not. Note that some delegation schemes have the property that the first round depends on the functionality F; we'll focus on adaptive schemes where, as in the message flow outlined above, this is not the case, and the functionality can be decided by the sender in the second round. Importantly, this approach gives us soundness, in that the receiver can reject if the sender performed the computation incorrectly, and it also maintains our property of succinctness, since the proof size and the verification algorithm's running time are polylogarithmic in the functionality's running time. Furthermore, we know of many delegation schemes for arbitrary polynomial-time functionalities, including adaptive ones, which are based on standard assumptions.
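The delegation interface above can be sketched in toy form like so. A strong caveat: this mock is neither a real proof system nor succinct. The "proof" is just a digest, and verification naively recomputes F, whereas a real adaptive delegation scheme would instead check a short proof in time polylogarithmic in F's running time. The function names are my own placeholders; only the key-setup / prove / verify message flow mirrors the scheme described above.

```python
import hashlib
import secrets

def del_keygen():
    # In an adaptive delegation scheme, the receiver's keys are
    # generated independently of the functionality F.
    sk = secrets.token_bytes(16)
    pk = sk.hex()  # mock "public key"
    return pk, sk

def del_prove(pk, f, x, y):
    z = f(x, y)
    # Mock "proof": a digest binding the statement (x, y, z) to pk.
    # A real proof would be short and efficiently checkable.
    proof = hashlib.sha256((pk + repr((x, y, z))).encode()).hexdigest()
    return z, proof

def del_verify(pk, f, x, y, z, proof):
    expected = hashlib.sha256((pk + repr((x, y, z))).encode()).hexdigest()
    # Naive soundness check: recompute f(x, y). This is what a real
    # delegation scheme avoids; it verifies the proof instead.
    return proof == expected and f(x, y) == z
```

Note that both inputs x and y are in the clear here, which is precisely why delegation alone gives no privacy, as discussed next.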
Of course, while delegation fixes the issue of proving that the sender computed the result correctly, the fact that verification requires the inputs to be public means that, in the setting of secure computation, delegation doesn't even give semi-honest security. Recently, a delegation scheme was constructed that provides input privacy for the sender, but this still falls short of the full simulation-based security that we need for secure computation. Thus, the main question from earlier, whether we can construct a succinct and maliciously secure non-interactive secure computation protocol, remains open: as we mentioned, the protocol of Badrinarayanan et al. gives malicious security with superpolynomial-time simulation, but not succinctness; known maliciously secure protocols with trusted setup likewise don't achieve succinctness; while, on the other hand, using a primitive such as delegation or FHE out of the box gives succinctness, but at best semi-honest security.

In our work, we answer this question in the affirmative by combining all three of the approaches I've outlined to construct the first protocol that gives both succinctness and malicious security. In fact, depending on the underlying primitives, our protocol gives either SPS security in the plain model or polynomial-time simulation-based security in the CRS model; hence, we effectively answer the above question in both of these models. Our main theorem assumes the three primitives that I've already discussed: a non-succinct, maliciously SPS-secure non-interactive secure computation protocol, like that of Badrinarayanan et al.; quasi-polynomially secure fully homomorphic encryption; and quasi-polynomially sound delegation with the adaptivity property I discussed, that is, with a first round independent of the functionality.
From these, we can construct in the plain model a maliciously SPS-secure, succinct NISC protocol which computes any polynomial-time computable functionality. In fact, as a corollary, we observe that our protocol can be based on quasi-polynomial hardness of the learning with errors (LWE) assumption by the following chain of implications: Brakerski et al. construct an adaptive delegation scheme from FHE, which can in turn be based on LWE, while the NISC of Badrinarayanan et al. can be constructed from a variant of oblivious transfer that they refer to as weak OT, which can also be instantiated using LWE. Second, as we mentioned, a slight variation of our protocol gives security with polynomial-time simulation in the CRS model, based on polynomial-time hardness of LWE, by using an underlying LWE-based NISC protocol which likewise gives polynomial-time simulation in the CRS model.

To build up our protocol, we'll start with a semi-honest approach using FHE, and then use delegation to prove that the sender's homomorphic evaluation was computed correctly. Of course, this by itself is far from a complete solution. First of all, remember that delegation inherently requires revealing both parties' inputs, in this case the FHE ciphertexts. Even though the inputs are encrypted, we still need to ensure that each ciphertext is hidden from the other party, both for the sake of privacy and for input independence, or, in technical terms, extractability of inputs. To that end, we'll use the underlying secure computation protocol to perform the verification step of the delegation while hiding the respective inputs. Notably, even though the inner protocol isn't succinct, the fact that the verification step of the delegation is by definition efficient means that computing it in the inner protocol won't invalidate the succinctness of our final protocol.
With this, we can securely prove correctness of the sender's message, but this isn't quite sufficient yet, as we still need to prove correctness of the receiver's first message. To do this, we will add additional checks to the underlying protocol that verify the receiver's first message with respect to their input and the randomness used. Interestingly, these checks will require us to use perfectly correct FHE and delegation, to rule out the possibility of a malicious receiver using adversarial randomness to craft a first message that verifies but compromises the protocol's correctness. Next, I'll go over each of these steps in more detail and show how we arrive at the complete protocol.

Let's begin by naively combining FHE and delegation. The high-level idea, as I mentioned on the last slide, is to delegate the computation of the homomorphic evaluation used to generate the output ciphertext. Here, I'll highlight the FHE steps in blue and the delegation steps in red for clarity. First, the receiver generates the keys for both the FHE and the delegation, encrypts its input using FHE, and sends the ciphertext and public keys to the sender. The sender encrypts its own input and uses the delegation protocol, along with the two input ciphertexts, to generate the homomorphically evaluated output ciphertext and a proof of its correctness. The sender then sends the proof and its input and output ciphertexts to the receiver, who verifies the sender's computation using the delegation protocol, rejects if it doesn't verify, and decrypts and returns the output otherwise. While it's fairly easy to see that this protocol is correct and succinct, there are, of course, several major problems, starting with the fact that while the receiver needs the sender's input ciphertext to verify the sender's computation, the receiver also holds the secret key, and so a malicious or even semi-honest receiver can simply decrypt that ciphertext to reveal the sender's input.
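The naive combination, and its flaw, can be seen concretely in a toy sketch. Both primitives here are insecure mocks of my own devising (the "FHE ciphertext" is just the plaintext, and "verification" recomputes the evaluation), so only the message flow and the flaw are meaningful: because the receiver holds the FHE secret key, receiving ct_y lets it decrypt the sender's input directly.

```python
import hashlib
import secrets
from dataclasses import dataclass

@dataclass
class Ciphertext:
    payload: int  # mock FHE: the "ciphertext" is just the plaintext

def fhe_keygen():
    sk = secrets.randbits(32)
    return sk, sk  # mock: pk == sk

def fhe_encrypt(pk, m):
    return Ciphertext(m)

def fhe_decrypt(sk, ct):
    return ct.payload

def del_prove(dpk, f, ct_x, ct_y):
    ct_z = Ciphertext(f(ct_x.payload, ct_y.payload))  # "homomorphic" eval
    tag = hashlib.sha256(repr((dpk, ct_x, ct_y, ct_z)).encode()).hexdigest()
    return ct_z, tag

def del_verify(dpk, f, ct_x, ct_y, ct_z, proof):
    ct_ref = Ciphertext(f(ct_x.payload, ct_y.payload))
    tag = hashlib.sha256(repr((dpk, ct_x, ct_y, ct_ref)).encode()).hexdigest()
    return proof == tag and ct_z == ct_ref

def protocol(x, y, f):
    # Round 1: receiver generates FHE and delegation keys, encrypts x.
    pk, sk = fhe_keygen()
    dpk = secrets.token_hex(8)
    ct_x = fhe_encrypt(pk, x)
    # Round 2: sender encrypts y and delegates the homomorphic evaluation.
    ct_y = fhe_encrypt(pk, y)
    ct_z, proof = del_prove(dpk, f, ct_x, ct_y)
    # Receiver verifies, then decrypts the output. The flaw: it can just
    # as easily decrypt ct_y and learn the sender's input y.
    assert del_verify(dpk, f, ct_x, ct_y, ct_z, proof)
    leaked_y = fhe_decrypt(sk, ct_y)  # the privacy break
    return fhe_decrypt(sk, ct_z), leaked_y
```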
To fix this, we'll need to make the ciphertexts private while still verifying the delegation somehow. As I mentioned before, this is where we use our underlying secure computation protocol. This instance will take all of the inputs for the delegation's verification function from the respective parties and perform the verification, returning the output ciphertext only if it accepts. This lets us modify the protocol to no longer reveal the sender's input ciphertext, and instead simply run the underlying NISC in parallel to perform the verification.

Of course, we're still not done. The next major issue is verifying that both parties' input ciphertexts are generated correctly from the respective inputs X and Y and, importantly, independently of any other inputs. Without this verification, for instance, a malicious sender could easily take the receiver's ciphertext for their input X, use homomorphic evaluation to maul it into a ciphertext for X + 1, and give the mauled ciphertext as its own input ciphertext. In a technical sense, the simulator in our security proof needs to be able to extract the inputs of both parties to send to the ideal functionality, which is impossible without this guarantee of input independence. To perform this verification, we'll once again turn to the inner NISC, adding as inputs both parties' inputs X and Y, the randomness r_X and r_Y used to encrypt them under the FHE, and the respective resulting ciphertexts; the inner NISC will then verify that the ciphertexts are correct with respect to the inputs and randomness. Notably, we do require the receiver's ciphertext ct_X to be input by the sender, to additionally verify that the receiver presented the correct ciphertext in its first message.
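The ciphertext-correctness check inside the inner NISC can be sketched in toy form as a re-encryption test. The mock "encryption" below is simply a deterministic digest of (pk, m, r); what it stands in for is the fact that real FHE encryption is also a deterministic function of the public key, message, and randomness, so re-encrypting with the claimed input and randomness and comparing ciphertexts certifies correct generation. The function and argument names are illustrative only.

```python
import hashlib

def fhe_encrypt(pk, m, r):
    # Mock deterministic encryption: a digest of (pk, m, r).
    return hashlib.sha256(repr((pk, m, r)).encode()).hexdigest()

def inner_check_ciphertexts(pk, receiver_in, sender_in):
    # Receiver's private inputs to the inner NISC: x and its
    # encryption randomness r_x.
    x, r_x = receiver_in
    # Sender's private inputs: y, r_y, its own ciphertext ct_y, and the
    # ct_x it received, so the receiver's first message is also checked.
    y, r_y, ct_y, ct_x = sender_in
    if fhe_encrypt(pk, x, r_x) != ct_x:
        return False  # receiver's ciphertext malformed or substituted
    if fhe_encrypt(pk, y, r_y) != ct_y:
        return False  # sender's ciphertext not independently generated
    return True
```

In particular, a sender who tries to reuse (or maul) the receiver's ciphertext as its own cannot supply matching (y, r_y), so the second check fails.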
This will also require perfect correctness of the FHE, to deal with the aforementioned issue of a malicious party using adversarial randomness to input a ciphertext that would verify but might compromise the correctness of the homomorphic evaluation. At this point, we can successfully verify that the sender has performed their part of the computation correctly. To verify the receiver's first message, however, even if the ciphertexts are correctly generated, we still need to verify that the public keys are correctly generated, as an invalid public key sent by the receiver would likewise invalidate the security properties of the FHE and delegation. We do this in the same manner as with the ciphertexts, by using the inner NISC: we add as inputs the public keys and the respective randomness from both the FHE and the delegation, and additionally verify that the keys are correctly generated with respect to that randomness. As before, the sender inputs the public keys themselves, to verify that the receiver sent the correct keys in its first message. Additionally, since the randomness is used to generate both the public and secret key at once, we no longer need the delegation's secret key as an input; we can instead implicitly verify the secret key by generating it from the given randomness and using the generated key for the inner verification. Similarly to why we required perfectly correct FHE when verifying the ciphertexts, this step will require perfect correctness and completeness for the delegation, as well as perfect correctness of the FHE, to avoid interference from adversarially generated randomness.

Now that the sender's and receiver's messages are verified, the final issue to deal with is that the output from the inner protocol might not necessarily be simulatable for the purposes of the security proof, since we don't know the distribution of output ciphertexts that might be returned by the homomorphic evaluation and thus output by the inner protocol.
We could deal with this by using FHE with a re-randomizability property and re-randomizing the output ciphertext before returning it from the inner protocol to make the distribution predictable, but doing so would present additional issues. Instead, we make use of an observation similar to the one we made on the last slide: since the inner protocol knows the randomness used to generate the FHE keys, it can in fact generate the secret key and decrypt the output itself. So we simply move the decryption inside the inner protocol and let the receiver obtain the decrypted output if all of the verifications are successful. With all of these issues dealt with, we've arrived at the final version of our protocol, for which we formally prove security in our paper.

In order to complete the proof, there were a few technical subtleties that required creative solutions on our part. I'll talk briefly about the one we consider most interesting and important, which occurs when considering the receiver's output when the sender is malicious. Namely, the ciphertext ct_X for the receiver's input needs to be simulated without knowledge of X itself, but the ciphertext is also an input to the inner protocol which, since it's given the randomness used to generate the secret key, implicitly already knows how to decrypt it. In technical terms, because of this dependence, it's not immediately clear how to reduce simulatability of the ciphertext to the security of the FHE, since a hypothetical FHE adversary knows only the public key and thus can't run the inner protocol to obtain its output, and consequently the receiver's final output. In fact, historically, similar situations in other protocols have been shown to allow a malicious sender to exploit this dependence to make the receiver's output subtly correlate with their input, breaking security. In our case, however, we can avoid this ostensible circularity by a careful sequence of hybrids.
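Before turning to the hybrids, the final inner functionality just described can be summarized in toy form: it regenerates the keys and ciphertexts from the supplied randomness, runs the delegation's verification, and, if everything checks out, derives the secret key and decrypts the output ciphertext itself, so that only the plaintext result leaves the inner protocol. All primitives here are insecure mocks and all names are illustrative; only the shape of the checks mirrors the real protocol. Note how the functionality depends on the key randomness r_k, which is exactly the dependence the hybrid argument below removes.

```python
import hashlib

def keygen(r_k):
    # Mock deterministic key generation from randomness r_k.
    sk = hashlib.sha256(("sk" + r_k).encode()).hexdigest()
    pk = hashlib.sha256(("pk" + r_k).encode()).hexdigest()
    return pk, sk

def encrypt(pk, m, r):
    return ("ct", pk, m, r)  # mock: deterministic in (pk, m, r)

def decrypt(sk, ct):
    return ct[2]  # mock decryption (ignores sk)

def eval_hom(f, ct_x, ct_y, pk):
    return ("ct", pk, f(ct_x[2], ct_y[2]), "eval")

def del_verify(f, ct_x, ct_y, ct_z):
    # Mock delegation check: recomputes the evaluation; stands in for
    # succinctly verifying the sender's proof.
    return eval_hom(f, ct_x, ct_y, ct_x[1]) == ct_z

def inner_functionality(f, receiver_in, sender_in):
    # Receiver: input x, encryption randomness r_x, key randomness r_k.
    x, r_x, r_k = receiver_in
    # Sender: input y, randomness r_y, plus what it received and sent.
    y, r_y, ct_x, ct_y, ct_z, pk = sender_in
    pk_regen, sk = keygen(r_k)
    if pk_regen != pk:
        return "reject"  # receiver's public key malformed
    if encrypt(pk, x, r_x) != ct_x:
        return "reject"  # receiver's ciphertext malformed
    if encrypt(pk, y, r_y) != ct_y:
        return "reject"  # sender's input not independently encrypted
    if not del_verify(f, ct_x, ct_y, ct_z):
        return "reject"  # sender's evaluation incorrect
    # Decryption moved inside the inner protocol: only the plaintext
    # output is released to the receiver.
    return decrypt(sk, ct_z)
```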
At an extremely high level, we introduce a hybrid in which the inner protocol's functionality no longer depends on the secret key. Specifically, it removes the FHE's randomness from its inputs; since it can then no longer generate the secret key, but does know the inputs X and Y, it simply returns the result F(X, Y), rather than decrypting the output ciphertext, whenever the delegation verifies correctly. We can then prove indistinguishability based on the soundness property of the delegation and, from that hybrid, reduce simulatability of the ciphertext to the security of the FHE. In the interest of time, I'll defer detailed discussion of this issue and of our solution to the paper.

So, in conclusion, we present the first protocol for maliciously secure non-interactive secure computation with an additional succinctness property, namely that the communication complexity and the receiver's running time are essentially independent of, that is, polylogarithmic in, the functionality's running time. We achieve both superpolynomial-time-simulation-based security in the plain model and security with polynomial-time simulation in the CRS model; no succinct protocols were known in either model prior to ours. Our protocol combines FHE and delegation in a relatively intuitive way and leverages a non-succinct inner secure computation protocol to attain security. Moreover, all of these primitives, and thus our protocol, can be based on the hardness of the learning with errors assumption. This concludes my virtual talk. Thank you very much for listening, and I will be able to take questions during the live part of the conference.