Okay, so let's continue with the second talk of the session, about Adaptively Secure Computation for RAM Programs, by Laasya Bangalore, Rafail Ostrovsky, Oxana Poburinnaya, and Muthu Venkitasubramaniam. Laasya will give the talk. — Thank you for the introduction, and good morning everyone. Today I'll be talking about adaptively secure computation for RAM programs, and I'll try to keep it interesting, given that adaptive security is heavyweight and, on top of that, RAM programs themselves can get unwieldy. Our main result is a constant-round 2PC protocol that achieves full adaptive security and requires only minimal assumptions. The focus of this talk is communication efficiency. More specifically, we are interested in communication complexity expressed in terms of the RAM complexity of the function, instead of having it depend on the circuit complexity, which is nothing but the circuit size. Secure multi-party computation is something we're all familiar with: we have n parties who want to compute a function securely, and we are interested in various corruption strategies, namely static and adaptive corruptions. In the static case, the adversary chooses the parties to be corrupted at the beginning of the protocol, and they are fixed in advance. This is not really realistic, because you could have hackers who look at the communication during the protocol and then decide which parties to hack. In general, life is adaptive, and we'd like to aim for adaptive security. So in an adaptively secure protocol, the corruptions can happen based on the communication. We are specifically interested in fully adaptive MPC: here, all the parties could potentially be corrupted by the end of the protocol. This is interesting because it matters when your protocol is part of a larger protocol: even when all the parties in the sub-protocol are corrupted, we want to ensure that the larger protocol remains secure.
Also, for randomized functionalities where the randomness is not known, we want to guarantee some security properties even when all the inputs and all the randomness used in the protocol are leaked. Full corruption is trivial in the static case, where we don't need to do anything additional, but it is really challenging in the adaptive setting. To show that our protocols are adaptively secure, we need to provide a simulator. The simulator first simulates the communication without knowing the inputs, so it basically generates some sort of fake transcript. Later, when it gets to know the inputs of the parties, it has to lie about that transcript: it equivocates, saying "here is the randomness that would generate this transcript," so that the transcript is consistent with the inputs it learns at a later point. So throughout this talk, when we discuss adaptively secure garbling schemes or any other primitive, we are basically going to show how to generate fake transcripts, and then show that they can be equivocated by providing the correct randomness. Now, the function f that we want to securely compute can be expressed either as a RAM program or as a circuit. For circuits, you can think of standard Boolean circuits; they are super efficient for highly structured computations such as FFT, and when we talk about circuit complexity, we measure the cost in terms of the number of gates in the circuit. RAM programs, on the other hand, are essentially circuits with additional memory accesses, which makes them more expressive. Moreover, high-level languages can be easily compiled down to RAM programs. When we look at RAM complexity, we are interested in the running time, so we measure all costs in terms of the running time of the RAM program.
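To make the simulate-then-equivocate pattern concrete, here is the textbook toy example (not the paper's construction, and all names here are illustrative): a one-time pad is perfectly equivocable, because any fake ciphertext can later be explained as an encryption of any message by exhibiting the right key.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Phase 1: the simulator produces a "transcript" (here, one ciphertext)
# without knowing any input -- just uniformly random bytes.
def simulate_ciphertext(n: int) -> bytes:
    return os.urandom(n)

# Phase 2: the input m is revealed; the simulator exhibits randomness
# (here, the key) under which the fake ciphertext is a valid encryption
# of m. For a one-time pad the claimed key is distributed exactly like
# an honestly sampled key, so the explanation is perfect.
def equivocate_key(fake_ct: bytes, m: bytes) -> bytes:
    return xor_bytes(fake_ct, m)
```

The adaptively secure primitives in this talk (equivocal garbling, equivocal encryption, equivocable ORAM) all follow this two-phase shape, just for far richer objects than a single pad.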
To quickly look at what the RAM model looks like: you have CPU step circuits, each of which is a really small circuit; for natural RAM programs it is of size polylog(T), where T is the running time. The flow of execution of a RAM program is as follows: each CPU step circuit takes a state as well as a value read from memory, and outputs the next state along with the location to be read by the next CPU step. Looking at the feasibility results for adaptive MPC for circuits specifically, Canetti et al. in 2002 showed that we can construct fully adaptively secure MPC protocols, but they required O(D) rounds, where D is the depth of the circuit. What we are interested in is constant-round protocols, so we focus on those from now on. Given specific assumptions, like secure erasures or the CRS model, we know how to get constant-round adaptively secure protocols. Later there was a surprising result by Canetti, Poburinnaya, and Venkitasubramaniam in 2017 (CPV17), solving a roughly 30-year-old open problem, showing that you can get a constant-round adaptively secure protocol from just minimal assumptions. Subsequently, the precise round complexity of these protocols was shown to be just two rounds by Benhamouda et al. Now, focusing on communication complexity: we have protocols with essentially optimal communication, independent of the size of the circuit, but they use really strong assumptions, such as a huge CRS as well as IO-based assumptions. There have been improvements, reducing the CRS size to depend only on the depth of the circuit, but the strong assumptions remain. If we move to the setting of minimal assumptions, we have works whose communication grows quadratically in the circuit size, and this is the line of work we are interested in. So the question we ask is: can we improve this communication beyond quadratic in the circuit size? Or is this inherent for all constant-round adaptively secure MPC protocols?
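The step-circuit flow described at the start of this passage can be sketched as a plain loop. This is an illustrative model, not the paper's formalization: each of the T steps consumes the current state plus the value read from memory, and emits the new state, a value to write back, and the next address to read.

```python
def run_ram(step, state, memory, T):
    """Execute T CPU steps. `step` models one polylog(T)-size step circuit:
    (state, read_value) -> (new_state, write_back_value, next_address)."""
    addr = 0
    for _ in range(T):
        read_val = memory[addr]
        state, write_back, next_addr = step(state, read_val)
        memory[addr] = write_back   # write back to the location just read
        addr = next_addr            # the next step reads from here
    return state
```

For example, a step circuit that walks the memory and accumulates a sum, with state `(counter, acc)`, visits addresses 0 through T-1 in order.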
Our answer is yes: we can improve it to depend on the RAM complexity of the function, specifically the square of the RAM complexity. So we get an analogue of the results in the circuit setting for the RAM setting, and show that the communication can be just T², where T is the running time. Looking at prior work for RAM programs in the static setting, there is a whole line of works that improve the communication of RAM programs and obtain communication with only polylogarithmic overhead, and the last of these works obtains a construction that is black-box in all of its underlying primitives. In the adaptive setting, there are works whose communication depends on the RAM complexity, but again they use CRS- and IO-based assumptions. So the current state of affairs is that you either have really strong assumptions on one hand, or you depend on the Boolean circuit complexity of the function on the other. What we aim for, even under minimal assumptions, is an adaptively secure protocol whose communication is proportional to the square of the RAM complexity. What we need is adaptively secure OT, and adaptively secure channels, which are realized using non-committing encryption. The focus for the rest of the talk is 2PC in the semi-honest setting; we have a result in the malicious setting as well, but I refer you to the paper for more details. Moving to the technical side, we want to lay out all the challenges that come up when designing adaptive protocols for garbled RAM, and how to overcome them. To start off, we begin with a really naive attempt. We know a lot of protocols for circuits, right? So why not just convert a RAM program into a circuit and then apply all the techniques we already know? If you take any generic RAM program and apply a deterministic transformation to it, you get a circuit, and then you can use CPV17, or any of the other constructions, to get an adaptively secure protocol for RAM programs.
But the challenge here is that the circuit size is quite large; in fact, it is T³, where T is the running time. And if you compile it with the adaptively secure protocol of CPV17, which is quadratic in the size of the circuit, you end up with communication of T⁶. So using existing techniques as-is really doesn't work, and we have to do something different. Instead, we try to garble each of the CPU step circuits. For RAM programs, these step circuits are potentially very small, of size polylog(T), so it makes sense to adaptively garble these small circuits rather than transforming the whole program into a new circuit. The challenges that we come across are, first, handling the memory accesses, and second, that these step circuits are in some sense connected: their inputs are outputs of the previous step circuit, and they also read values from memory. Adapting the existing adaptively secure works to this setting is not straightforward, and we show how to handle each of these challenges for the rest of the talk. Addressing challenge one, the simpler part, we focus on oblivious RAM. We know how to protect memory accesses in the garbled RAM setting: you basically use an ORAM. In an ORAM, if you have two parties, Alice and Bob, Alice holds a huge database and Bob wants to query it at various memory locations, which need to be protected, because if Alice learns the memory access pattern then this leaks something about Bob's inputs. So we use an ORAM here, but to achieve adaptive security we need something stronger: we show that we need the equivocation property that I described in the definition of adaptive security. What we need is that, when Alice is corrupted, the simulator can generate a fake oblivious memory access pattern, and later, when Bob is corrupted, the simulator can show that this memory access pattern is actually consistent with Bob's inputs.
This is done by providing the appropriate randomness. If you take any statistical ORAM, we show that there exists randomness demonstrating consistency between the oblivious memory accesses and the actual memory accesses. But the question is: can we extract that randomness efficiently? We need a very efficient algorithm to extract it; otherwise, the overall communication complexity of our protocol blows up. This is the additional requirement we need from the oblivious RAM. The reason we need this is that the randomness-extraction part is actually integrated into the garbled RAM and directly affects the size of the circuit. Next we go over a particular tree-based ORAM, but the techniques we discuss here apply to generic tree-based ORAM protocols. Consider a database D with eight memory locations. When you apply the ORAM transformation, you essentially get a big tree-based structure, where each node in the tree can potentially store multiple elements. The property we have is that every memory location is associated with a leaf node in the tree, which means the value of that memory location is found somewhere along the path from the root to that leaf node. Whenever you have a read operation, it gets translated into two passes from the root to a leaf node, and these passes look random. Essentially, the first pass accesses the memory location, and the second pass ensures that after you read a value it is moved to a different location, because otherwise you are exposed to linkability attacks if you do not move data around in the tree. So let's run a simple example. Suppose you have a read operation where you want to look up the value at position three, which is C. The first thing you do is run down the highlighted path along the tree and access C.
You don't want to keep C in the same position, so you have to move it to a different location. Before we do that, we actually assign a separate leaf node to C, which is leaf node two in this example, but we do not traverse down to it, because if we traversed down to the leaf node that C is now associated with, the adversary would know that this is the path that C is on. Instead, we move C to the root and then flush it down along a randomly chosen path, such that it remains on the path to leaf node two. These techniques are standard, and we pick the simplest ORAM protocol, by Chung and Pass, to illustrate them. Essentially, the two things to note are: the two passes you make through the tree look random and do not depend directly on the inputs, and when you present the randomness associated with the ORAM later, that is what gives meaning to these two random passes through the tree. So when you read memory location three again at a later point, we want to make sure that you don't end up on some random path but actually read the path that C is on, and the path that C is on leads to leaf node two, the one we picked but did not traverse earlier. That is the first traversal you make when you access memory location three again. So when you read the same memory location again, we want to ensure consistency, and this is done by providing the correct randomness. To ensure adaptive security for the ORAM, we need to produce the randomness connecting the two views: you have the actual memory accesses and the oblivious memory accesses, and the randomness used within the ORAM for going from one to the other is what we want to extract.
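The read-and-relocate mechanics just described can be sketched as a toy tree-based ORAM in the Chung–Pass style. Everything here is illustrative (unbounded buckets, no encryption, no recursive position map), not the actual protocol; the point is just the two root-to-leaf passes per access and the fresh leaf that is assigned but not traversed.

```python
import random

class ToyTreeORAM:
    def __init__(self, n, seed=0):
        self.n = n                       # number of leaves; a power of two
        self.rng = random.Random(seed)
        self.bucket = {}                 # node id -> list of (addr, val)
        self.pos = {}                    # position map: addr -> leaf index

    def path(self, leaf):
        """Node ids on the leaf-to-root path (heap numbering, root = 1)."""
        node, nodes = leaf + self.n, []
        while node >= 1:
            nodes.append(node)
            node //= 2
        return nodes

    def read(self, addr):
        # Pass 1: traverse the path of the currently assigned leaf.
        leaf = self.pos.get(addr, self.rng.randrange(self.n))
        val = None
        for node in self.path(leaf):
            for blk in list(self.bucket.get(node, [])):
                if blk[0] == addr:
                    val = blk[1]
                    self.bucket[node].remove(blk)
        if val is not None:
            # Reassign a fresh leaf but do NOT traverse its path now;
            # park the block at the root instead.
            self.pos[addr] = self.rng.randrange(self.n)
            self.bucket.setdefault(1, []).append((addr, val))
        self._evict()                    # pass 2: flush down a random path
        return val

    def write(self, addr, val):
        self.read(addr)                  # remove any stale copy
        self.pos[addr] = self.rng.randrange(self.n)
        self.bucket.setdefault(1, []).append((addr, val))
        self._evict()

    def _evict(self):
        """Flush blocks down a randomly chosen root-to-leaf path, keeping
        each block somewhere on the path to its own assigned leaf."""
        epath = set(self.path(self.rng.randrange(self.n)))
        for node in sorted(epath):       # root first (smaller ids first)
            for blk in list(self.bucket.get(node, [])):
                bpath = set(self.path(self.pos[blk[0]]))
                dest = max(epath & bpath)  # deepest node on both paths
                if dest != node:
                    self.bucket[node].remove(blk)
                    self.bucket.setdefault(dest, []).append(blk)
```

The invariant is that a block always sits somewhere on the root-to-leaf path of its assigned leaf, which is exactly what makes later reads find it.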
The simulator for the ORAM first just samples 2m arbitrary leaf nodes, completely independent of the inputs, where m is the number of memory accesses, and sets these as the oblivious memory accesses. Later, when it gets to know the actual memory accesses, it needs to provide the randomness, and the randomness, if you think about it, consists essentially of the "leaf node two" values from the example: the leaves that were assigned but not traversed. So if the same memory location is accessed again, we ensure that the randomness for leaf node two is set accordingly, so that the oblivious memory accesses look consistent with the inputs. Specifically, if α₁ equals α₂, we want to ensure that leaf node two of the first access equals leaf node one of the second access, and when we present the randomness, we give all the leaf nodes associated with the second pass. The efficiency of this extraction is m·polylog(m), and this is necessary to ensure that the overall garbled RAM complexity is T². Addressing challenge two: we cannot just garble each of the step circuits independently, because each one takes inputs from the previous garbled circuit and also reads values from memory. Before we look at that, here is a high-level overview of how circuits are garbled. For standard garbling, we know that if we have a CPA-secure encryption scheme, we can use it within garbled circuits and then construct a 2PC protocol. For CPV17, you basically replace the garbled circuits with equivocal garbled circuits and the encryption with equivocal encryption.
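The consistency constraint the simulator must satisfy can be sketched as follows. This is an illustrative toy, not the paper's extractor: the fake pattern is 2m leaves sampled up front, and the "randomness" exhibited later is the reassigned leaf at each access, which is forced to equal the first traversed leaf of the next access to the same address.

```python
import random

def simulate_pattern(m, n, rng):
    """Phase 1: for each of m accesses, two root-to-leaf passes, i.e. 2m
    leaves, sampled with no knowledge of the real addresses."""
    return [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]

def equivocate_reassignments(pattern, real_addrs, n, rng):
    """Phase 2: given the real addresses, output the reassigned leaf for
    each access. Constraint: if an address is next accessed at step j,
    the leaf reassigned now must equal the first leaf traversed at step j.
    Walking backwards makes each constraint available when it is needed."""
    out = [None] * len(real_addrs)
    next_first = {}   # addr -> first-pass leaf of its next access
    for i in range(len(real_addrs) - 1, -1, -1):
        addr = real_addrs[i]
        # Unconstrained (last access to this address): any fresh leaf works.
        out[i] = next_first.get(addr, rng.randrange(n))
        next_first[addr] = pattern[i][0]
    return out
```

Because every output leaf is either uniformly random or a copy of an independently sampled uniform leaf, the exhibited randomness is distributed like honest ORAM randomness, which is the heart of the equivocation argument.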
We need this to support adaptive security, and we follow the same framework as CPV17, but we show that we cannot use the same primitives as CPV17; instead, we need to come up with a RAM-efficient equivocal encryption, because if we try to plug in the primitives of CPV17, the cost increases. This is what I want to motivate for the rest of the talk: why we couldn't simply combine existing primitives with garbled RAM to get our final construction. Here is a very quick overview of Yao's garbling scheme: you have key generation, garbling the inputs, and garbling the circuit. When you generate the keys, you generate two keys for every wire. To garble the inputs, you pick one of the keys corresponding to each input wire. Lastly, to garble the circuit, you generate four ciphertexts for every gate, such that given any two keys, one for the left wire and one for the right wire, you can step through the circuit, decrypt exactly one row of these four ciphertexts, and determine the key for the output wire, which you can finally translate to the output using the output translation table. The main point here is that you have four encryptions per gate, and when we look at security, we want to be able to simulate this: we generate a fake garbled circuit and later equivocate it. For static security of Yao's garbling, you basically pick just one key for every wire; one of the four ciphertexts is generated correctly, as an encryption under the two active keys for the input wires, and the remaining three ciphertexts are simulated. We'll see what that means. Finally, you get a key for the output wire, which you claim matches C(x).
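The four-ciphertexts-per-gate structure can be sketched as follows. This is a minimal toy version of Yao's scheme as described above, assuming a hash-based one-time encryption and a zero tag for recognizing the correct row (real schemes use the point-and-permute trick instead); all names are illustrative.

```python
import hashlib, random

KEYLEN = 16

def _pad(k1: bytes, k2: bytes) -> bytes:
    return hashlib.sha256(k1 + k2).digest()        # 32-byte one-time pad

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_gate(gate, kL, kR, kO, rng):
    """kL, kR, kO are (key_for_0, key_for_1) pairs. Row (a, b) encrypts
    the output-wire key kO[gate(a, b)] under input keys kL[a] and kR[b]."""
    rows = []
    for a in (0, 1):
        for b in (0, 1):
            pt = kO[gate(a, b)] + bytes(KEYLEN)    # output key || zero tag
            rows.append(_xor(_pad(kL[a], kR[b]), pt))
    rng.shuffle(rows)                              # hide the (a, b) order
    return rows

def eval_gate(rows, ka, kb):
    """Holding one key per input wire, exactly one row decrypts to a
    plaintext with the zero tag (with overwhelming probability)."""
    pad = _pad(ka, kb)
    for row in rows:
        pt = _xor(pad, row)
        if pt[KEYLEN:] == bytes(KEYLEN):
            return pt[:KEYLEN]
    raise ValueError("no row decrypted")
```

The evaluator learns one output-wire key and nothing about which plaintext bits it corresponds to, which is what the output translation table later resolves.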
When you look at the input keys, you actually don't know what values they are associated with: you don't need to know the inputs to produce the input keys. For adaptive security, when you are later given the input, you need to show that the fake garbling that was generated is actually consistent, by providing the randomness, which in this case consists of the inactive keys as well as the randomness used in the encryption. With standard garbling, what we actually have is three rows that are simulated and one row encrypted under the active keys, but for adaptive security we want something more: we want to be able to decrypt the other three rows as well and exhibit the active or inactive key encrypted within each of them, because when the adversary later sees the input of the garbler, we want to be able to equivocate the ciphertexts that were generated and show that they are actually consistent with the garbler's input. The most important point here is that this encryption algorithm needs to be aware of the circuit: it has the circuit hard-coded in it, and it uses the input and the circuit to determine the values associated with each wire. So how do we come up with such an encryption algorithm? The first approach would be to use non-committing encryption: you can present a ciphertext and later give a key that opens it to whatever message you want. This would work, but the issue is that the size of the keys increases drastically when you plug it into a garbling scheme, which we don't want. To solve this issue, CPV17 came up with a circuit-efficient equivocal encryption where, instead of being able to open a ciphertext to any message, you can only open it to a subset of messages, and this drastically improves efficiency because the key sizes are much smaller. The set of messages you can equivocate to is determined by a specific function.
The set of messages is actually the image space of a particular function F, and if you set the function F correctly, then you can equivocate whatever you need. In our case, we need to equivocate the active and inactive keys according to the wire values in the circuit. CPV17 basically set the function to be the circuit that you want to securely compute, so that when you provide the input, you can come up with the inactive keys for every encryption such that the final simulated ciphertexts decrypt to either the active or the inactive key, as per the circuit evaluation. So the key goal here is to figure out how to set this function F, and we will look at various options for instantiating it. As suggested earlier, we could just instantiate it with the CPU step circuits, which are really small, but the issue is that the CPU step circuits have only a local view; they don't capture the entire RAM program. (Two more minutes.) Yeah, so they don't capture the entire RAM program, and that is not sufficient: we want to set the function F such that, given the inputs and the circuit, you can equivocate the values that you need to. If you convert the RAM program into a circuit and use that as the function F, we get an improved communication cost, but it is still T⁴. The last option we are left with is using the RAM program itself as the function F, and this gave us the most efficient equivocal encryption algorithm, where the ciphertext size is just proportional to T, and the resulting communication is T², because you have T·polylog(T) ciphertexts throughout the garbled RAM. Quickly going over the other challenges: most garbled RAM techniques use PRFs in a non-black-box way, so they do not directly integrate well with adaptive security; GLO is one work that fits well. We also have a maliciously secure protocol with the same complexity as the semi-honest one.
Lastly, I'd like to leave you with an open question. Currently, the state of the art is that we have adaptively secure protocols whose communication complexity is quadratic in the circuit complexity or in the RAM complexity. We want to understand whether this is inherent for all adaptively secure protocols, or whether one can improve beyond this quadratic bound in communication for constant-round adaptively secure protocols. That's it, thank you. — Let's thank the speaker. We have time for a short question. Anyone has questions? Then I have a question. There are multi-party equivalents of, let's say, garbled circuits, like BMR protocols. Is there any hope that one could take this approach and extend it to multi-party computation, let's say with passive security, or are there some concrete challenges that need to be overcome? — So we were thinking about this, and we don't see any concrete barriers, but just writing out the whole technical part, handling all the notation for BMR and combining it with garbled RAM, seems like the most challenging thing right now. Thank you. — Okay, let's thank the speaker again.