Hello everyone. I'm Siddhartha Jayanti, and I'm here to present our Eurocrypt 2020 paper, Efficient Constructions for Almost Everywhere Secure Computation. This is joint work with Srinivasan Raghuraman and Nikhil Vyas. In this work, we are interested in creating secure communication networks. We'll denote the networks as graphs, where the nodes represent distinct processes and the edges represent secure links. We'll draw them like this. The goal is to design graphs G along with protocols Pi that give a communication protocol between every node u and node v. Sometimes nodes will fail. We'll denote by T the subset of nodes that fail. When a node fails in our model, it fails in a Byzantine way; that is to say, failed processes can coordinate and deviate arbitrarily from the assigned protocol Pi. Traditionally, secure communication has been studied for nodes connected as a complete graph. The famous PSL theorem states that all non-failed nodes can securely communicate with each other in a complete graph if fewer than a third of them fail. However, notice that the theorem would require half a trillion secure links for only a million nodes. Therefore, we are interested in sparse networks, that is, graphs of low degree. In such graphs, however, failed nodes can disconnect the graph, and thereby some nodes can be isolated and doomed never to communicate securely, even though they have not failed. We call the largest collection of non-failed nodes for which we can devise a secure communication protocol the privileged nodes. Thus, we want graphs of low degree with communication protocols that can sustain many failures while dooming as few nodes as possible and doing as little computational work as possible. In short, we want high-resilience networks sustaining t = f(n) failures while dooming at most x = g(t) nodes. Since not all non-failed nodes are privileged communicators, we call protocols for this problem almost-everywhere secure communication protocols.
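To make the isolation phenomenon concrete, here is a minimal sketch (ours, not from the paper) of how deleting failed nodes from a sparse graph can doom honest nodes: we treat every surviving node outside the largest surviving connected component as doomed. The function name and this component-based notion of "doomed" are illustrative simplifications; the real definition concerns which pairs of nodes can still run a secure communication protocol.

```python
def doomed_nodes(adj, failed):
    """Toy model: delete the failed nodes from the graph and call every
    surviving node outside the largest remaining connected component
    'doomed'. `adj` maps each node to a list of its neighbors."""
    alive = {u for u in adj if u not in failed}
    comps, seen = [], set()
    for u in alive:
        if u in seen:
            continue
        # Depth-first search restricted to surviving nodes.
        stack, comp = [u], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(y for y in adj[x] if y in alive and y not in comp)
        seen |= comp
        comps.append(comp)
    giant = max(comps, key=len) if comps else set()
    return alive - giant
```

For instance, on a path of six nodes, failing the single middle node dooms everyone on the smaller side, even though those nodes never failed themselves.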
The study of these protocols was initiated by Dwork, Peleg, Pippenger, and Upfal. In their seminal paper, these authors gave a constant-degree graph that sustains up to a logarithmic fraction of the nodes failing, while dooming at most a number of nodes linear in the number of failures. Their total work is also reasonably efficient: it's linear. Upfal gave a network with improved resilience that sustains up to linearly many failures while still having constant degree and dooming only linearly many nodes. However, his work complexity is exponential, making the protocol impractical. A more recent work by Chandran et al. improved on the work complexity but gave up on the graph degree: it brought the work complexity back to linear, but the degree went up to a polylog. In this paper, we build on the ideas of Dwork et al. and Upfal to build a simple network and protocol that have strictly better specs than Chandran et al.'s most recent work. In particular, we improve the graph degree to a simple logarithm and the total work, for the first time, to a polylogarithm. In this presentation, we will build up to our main result through the following intermediate results. We first build a constant-degree graph that is highly resilient and has a linear-work protocol if the set of failed nodes is picked at random rather than adversarially. Next, we show that under the same random-adversary model, the same network admits a more efficient polylogarithmic-work protocol. Finally, we think of the previously built graph as a layer and build a graph with logarithmically many such layers. Our main result shows that this graph of logarithmic degree is resilient to linearly many adversarial Byzantine failures. Curiously, our constructions are made via the probabilistic method. In each case, however, we give random processes that build the desired graph with extremely high probability. Before we get to our constructions, we recall two building blocks that we will use from previous work.
Dwork et al.'s butterfly network has constant degree, sustains up to n over log n failures while dooming order t log t nodes, and admits a protocol with linear work. Upfal's expander network has constant degree, sustains linearly many failures while dooming order t nodes, and admits a protocol with exponential work. We illustrate each network pictorially with the graph on the right. The main idea in our first step is to build a butterfly supernode graph, where each supernode holds a mini expander graph. We call this graph G-rand, and the small expander graphs committees. Each super-edge between supernodes is realized as a complete matching between the actual nodes on either side of the super-edge. The protocol Pi-rand consists of two building blocks. Communication within supernodes is done via Pi-expander, the exponential-time protocol given by Upfal. Communication across a super-edge (s, t) is done in three steps: first, Pi-expander is called in s; then, each node in s transmits the message to t via the matching; and finally, Pi-expander is invoked in t. Here's why it works. Let's assume that t = epsilon n failures happen. We assume the committees are of size s, and we call a committee bad if an epsilon fraction of its nodes fail. The probability that a given committee is bad is at most 1 over epsilon to the epsilon times s. In particular, we can calculate the probability of more than b committees being bad, where b, a logarithmic fraction of the committees, is the limit that the butterfly can handle. We conclude that this probability is exponentially small if the size of the committees is omega of log log n. The fact that this probability is extremely small is crucial to later steps in the construction, where we will have to take a union bound. By the butterfly guarantee, our calculation means that at most a constant fraction of committees are doomed. By the expander guarantee, at most a constant fraction of nodes in good committees are doomed.
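The committee calculation above can be sanity-checked with a small Monte Carlo sketch (illustrative, not the paper's exact bound): each node fails independently with probability epsilon, a committee is called bad if more than a 2-epsilon fraction of its members fail, and the estimated bad probability visibly decays as the committee size s grows.

```python
import random

def committee_bad_prob(s, eps, trials=20000, seed=0):
    """Estimate Pr[a committee of size s is bad], where 'bad' means more
    than a 2*eps fraction of its members fail and each node fails
    independently with probability eps (a toy model of the random
    adversary; the 2*eps threshold is an illustrative choice)."""
    rng = random.Random(seed)
    threshold = 2 * eps * s
    bad = sum(
        1
        for _ in range(trials)
        if sum(rng.random() < eps for _ in range(s)) > threshold
    )
    return bad / trials
```

For example, with eps = 0.1 the estimate drops sharply between s = 8 and s = 40, matching the intuition that committees of size omega of log log n make bad committees rare enough for the later union bounds.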
When we put all this together, we ascertain that privileged nodes in privileged committees are privileged, and thus a constant fraction of nodes are privileged in our construction. How efficient is our protocol? Recall that committees are of size s, that Pi-expander thereby does exponential work in s, and that Pi-edge also does exponential work in s. Now let us calculate the cost of a single u-v communication, where u is in committee Cu and v is in Cv. The total work is bounded by the product of the exponential-in-s per-edge cost, the maximum path length between Cu and Cv in the butterfly network, and the number of such paths used by the butterfly's secure communication protocol. Thus, substituting in s = O(log log n), we see that the total work is linear. The next step in our construction is motivated by the observation that cutting down the number of paths used to polylogarithmically many would reduce the entire work complexity to a polylogarithm. Now we embark on the second step of our construction, where we modify the butterfly communication protocol. To do this, we recall how the butterfly communication protocol works. In a u-v communication by the original protocol, the sending node u floods linearly many paths from u to v with the message, and the receiving node v takes the majority of the received transmissions to be the real message. In the proof of correctness, Dwork et al. show that at most one third of the paths between privileged nodes contain a failed node. We exploit this fact to observe that if we randomly select O(log n) of the original paths between each u-v pair and only flood the selected paths with the message, we will still achieve successful transmission with fairly high probability. In particular, a Chernoff bound shows us that a majority of the selected paths will be good with probability at least 1 minus 1 over n to the fourth for each pair (u, v) that was previously privileged. Now is where the probabilistic method comes in.
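Here is a sketch of the modified transmission step, under illustrative assumptions of ours (not the paper's interface): `paths` enumerates the butterfly's u-to-v paths, `deliver(path, message)` models what v receives on one path (a corrupted path may return anything), and the sender floods only a random sample of k = O(log n) paths.

```python
import random
from collections import Counter

def send_over_sampled_paths(message, paths, deliver, k, seed=0):
    """Flood only k randomly chosen paths with the message; the receiver
    outputs the majority of the values that arrive. If at most one third
    of all paths are corrupted, a Chernoff bound says the sampled majority
    is correct with high probability."""
    rng = random.Random(seed)
    chosen = rng.sample(paths, min(k, len(paths)))
    received = [deliver(p, message) for p in chosen]
    value, _count = Counter(received).most_common(1)[0]
    return value
```

When every path is honest the majority trivially recovers the message; the interesting regime is the one in the talk, where up to a third of the paths lie and the sampled majority is still correct with probability at least 1 minus 1 over n to the fourth.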
A union bound over all pairs shows that all u-v communications between privileged nodes survive with positive probability, and thus some correct selection of O(log n) paths exists. This is how we reduce the communication cost. Looking back at our previous calculations, we see that substituting the new efficient protocol on the butterfly graph into the old network improves the work complexity of our previous construction to polylogarithmic in the number of nodes n. Until now, we have worked with the random adversary that causes a random set of epsilon n nodes to fail, and have shown a constant-degree network and an extremely efficient protocol that is highly resilient to such failures with all but exponentially low probability. Now to the step that we've all been waiting for, where we construct a log-degree graph that is resilient to linearly many adversarial failures. The new graph is constructed with O(log n) layers of edges, where each layer E_i is identical to our previous edge set E-rand with the vertex names permuted randomly. The protocol is also extremely simple: the sending node sends via Pi-rand on each of the layers, and then the receiving node takes a majority. Now we present the highest-level intuition for why this simple protocol suffices. Notice, by our previous derivation, that for a fixed set of epsilon n failed nodes, each layer, which was picked randomly with respect to the adversary, fails with the extremely small probability of 1 over epsilon to the O(n over log n). So when we take the majority over all layers, even the logarithm dividing the exponent goes away, and we get the probability of our protocol failing to be less than 1 over epsilon to the O(n), an extremely, extremely small probability. It turns out that we need all of this strength, because in our final step we take a union bound over all of the n choose epsilon n possible adversaries.
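The layered protocol itself is just a majority over per-layer transmissions. In this sketch (our naming, not the paper's), `run_layer(layer, message)` stands for an entire Pi-rand execution on one randomly permuted copy of E-rand and returns what the receiver decodes on that layer.

```python
from collections import Counter

def send_layered(message, layers, run_layer):
    """Send the message via Pi-rand on every layer; the receiver outputs
    the majority of the per-layer results. Against a fixed adversary each
    layer fails only with probability exponentially small in n/log n, so
    the majority over O(log n) layers fails with probability exponentially
    small in n."""
    results = [run_layer(layer, message) for layer in layers]
    value, _count = Counter(results).most_common(1)[0]
    return value
```

As long as fewer than half of the layers are corrupted, the receiver decodes correctly, which is exactly the slack the union bound over all n choose epsilon n adversaries consumes.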
The probability in the previous step is so small that even after we take a union bound over all of the n choose epsilon n possible adversaries, there's a positive probability of successful message transmission. Thus, by the probabilistic method, there's some layering that works deterministically. In fact, an exact calculation in the paper, which I urge you to read, shows that randomly picking the layers works with all but exponentially small probability. So here we see once again the slide that I showed you at the beginning, which walks through the three main contributions and the order in which we got to our final result. Now I leave you with the main open question in this problem. We've shown a protocol with optimal resilience and almost optimal work; maybe the work can be brought down from a polylog to a log. But the degree of our graph is O(log n). Upfal, in his previous work, showed that if we are willing to give up on the polylog work complexity and go all the way up to exponential, then it's theoretically possible to get a constant-degree graph. So now the main question is: is there a construction that brings the degree down to O(1) while keeping the work complexity at a polylog? With that, I'd like to end my presentation and thank you all for kindly listening.