This is an abstract video for the paper Maliciously Secure Massively Parallel Computation for All-but-One Corruptions. My name is Rex Fernando, and my collaborators are Yuval Gelles, Ilan Komargodski, and Elaine Shi.

In our work, we're interested in the types of massively parallel algorithms that run inside data centers; probably the most famous example is the Google search algorithm. What we want to know is whether it's possible to achieve meaningful notions of security for these massively parallel algorithms.

The model we'll be working in is called the massively parallel computation model, which has been adopted by the algorithms community for studying exactly this type of computation. At a very high level, it works as follows. We assume there is some very large input string, and that this input string is divided among a large set of machines, each of which has very limited local space, much less than the size of the input. These machines then run a protocol together, and the output of the protocol is defined to be the concatenation of the local outputs of the machines. I'll go into more detail about this model in the full version of the talk, but two details are worth mentioning here. First, the local space of each machine is limited to n^ε, where n is the size of the input and ε is a small constant. Second, when studying distributed algorithms in this setting, we restrict ourselves to algorithms that are very round-efficient; in particular, they take logarithmic rounds or fewer.

The question we're interested in is whether it's possible to build protocols in this model that satisfy security guarantees similar to those found in classical secure computation protocols.
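To make the model concrete, here is a minimal Python sketch of an MPC-style computation under the constraints just described: the input is split among machines with roughly n^ε local space, the machines exchange messages in one synchronous round, and the output is the concatenation of the machines' local outputs. This is a toy illustration, not from the paper; the function name `mpc_sort` and the simple bucketing strategy are my own, and the sketch ignores load balancing, so a bucket may exceed the local-space bound on skewed inputs.

```python
import math

def mpc_sort(data, epsilon=0.5):
    """Toy sort in the massively parallel computation model.

    One communication round: each machine forwards each of its elements
    to the machine responsible for that element's value range.
    """
    n = len(data)
    s = max(1, math.ceil(n ** epsilon))   # local space per machine ~ n^epsilon
    m = math.ceil(n / s)                  # number of machines

    # Distribute the input: machine i initially holds a chunk of size <= s.
    local = [data[i * s:(i + 1) * s] for i in range(m)]

    # One round of communication: route each element to the machine whose
    # value range contains it (a simple bucketing strategy, for illustration).
    lo, hi = min(data), max(data)
    width = (hi - lo) / m or 1
    inbox = [[] for _ in range(m)]
    for chunk in local:
        for x in chunk:
            dest = min(m - 1, int((x - lo) / width))
            inbox[dest].append(x)

    # Each machine sorts its bucket locally; the protocol's output is the
    # concatenation of the machines' local outputs, in machine order.
    return [x for i in range(m) for x in sorted(inbox[i])]
```

With ε = 0.5, ten elements are split across three machines of local space four, and the concatenated local outputs form the fully sorted list.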
In these classical secure protocols, we generally assume that some subset of the parties are malicious, meaning they are controlled by some polynomial-time adversary. In this setting, we want to protect the inputs and outputs of the honest parties: we want to prevent the adversary from learning these inputs and outputs, and also from tampering with the outputs.

Going back to the massively parallel computation model, recall the two main constraints: first, the local space of each machine is limited, and second, the number of rounds is very limited. The question is, can we achieve security akin to classical secure computation while still respecting the constraints of this model as much as possible? A natural question: there's a ton of work on classical secure computation, so does any of it apply to this setting? Unfortunately, the answer is no, because at a very high level, almost all work on classical secure computation relies on simultaneous broadcasts, which are very problematic for our model.

I'll close by quickly stating our results. For context, the bottom of the slide contains a table describing the state of the art prior to our work. With that in mind, our first result is an impossibility: we show that it is impossible to achieve anything stronger than semi-honest security in the setting of all-but-one corruptions, and this impossibility applies to any setting with arbitrary trusted setup. Our second result is a fully maliciously secure compiler that works exactly in the setting of all-but-one corruptions. The way we make this work is to first observe that our impossibility result does not apply in the programmable random oracle model. More specifically, we give a compiler that takes any insecure massively parallel protocol and turns it into a secure one with minimal efficiency blow-up. I hope this piqued your interest in our work.
And if so, hopefully I'll see you on Monday.