Hi everyone, I'm Akshayaram Srinivasan. I'm going to talk about Adaptive Garbled RAM from Laconic Oblivious Transfer. This is based on joint work with Sanjam Garg and Rafail Ostrovsky. Let me start this talk by giving you a brief introduction to garbled RAM, which was introduced by Lu and Ostrovsky in 2013. In this setting, there is a user who has a large database D and wishes to store this database on the cloud. So first he encrypts the database and sends the encrypted database to the cloud. Later, he wishes to run a RAM program P that has read-write access to the database, and he also chooses an input x on which the program has to be run. So he first garbles the program and the input to obtain P tilde and x tilde, and sends the garbled versions to the cloud. The cloud now evaluates the garbled program on the encrypted database to learn the output. Garbled RAM requires the size of the garbled program P tilde to grow only with the running time of the original program, times some polynomial factor in the security parameter and some polylogarithmic factor in the size of the database. Similarly, it requires the size of the garbled input to grow only with the size of the original input, times some polynomial factor in the security parameter and some polylogarithmic factor in the size of the database. For security, we require that the cloud provider, who gets the garbled program P tilde, the garbled input x tilde, and the encrypted database, learns only the output of the program; everything else about the database, the program, and the input is hidden. We can also consider the persistent setting, wherein the user can choose multiple program-input pairs, and we want the changes that are made to the database to be persistent across these executions. A long line of work, starting with the original work of Lu and Ostrovsky, has given us very efficient constructions of garbled RAM from the basic assumption that one-way functions exist.
The security notion that has been considered in all these works is called selective security, and in this talk we'll be interested in the stronger notion of adaptive security. Adaptive security is modeled as a game between an adversary and a challenger. The adversary is provided with the encrypted database, and this database could itself be adversarially chosen. The adversary now gives a RAM program P that it wants to evaluate and gets the garbled program P tilde. Later, depending on this garbled program P tilde, it might adaptively choose an input x that it wants to evaluate on, and it gets the garbled input x tilde. As far as efficiency is concerned, we need the same properties as in the previous setting. For security, we require that the adversary cannot distinguish between the case where P tilde and x tilde are generated honestly and the case where P tilde and x tilde are generated by a simulator that just learns the output of the program. Unlike the selective setting, constructions satisfying the stronger notion of adaptive security have been harder to obtain, and all the prior constructions are either in the random oracle model or based on extremely strong assumptions such as indistinguishability obfuscation. So the main question that we are interested in is: can we construct an adaptive garbled RAM from standard assumptions? A natural question that arises is why adaptive security is important, and why we care about obtaining adaptive garbled RAM from standard assumptions. The main motivation for studying adaptive garbled RAM comes from the sequence of works studying adaptive garbled circuits. Adaptive garbled circuits have several interesting applications, such as one-time programs, online-offline two-party computation, verifiable computation, and even constructing adaptively secure compact functional encryption.
In addition to these applications, adaptive garbled RAM has several interesting consequences for MPC for RAM programs. In particular, combining an adaptive garbled RAM scheme with a constant-round malicious MPC for circuits, one can obtain a constant-round malicious MPC for RAM programs in the persistent setting. Prior constant-round protocols based on standard assumptions could not support persistence in the malicious setting, and hence constructing an adaptive garbled RAM from standard assumptions would readily give a solution to this problem. Thus, adaptive garbled RAM is a well-motivated question. So let me now tell you about the results that we have in the paper. The main result is a construction of adaptive garbled RAM from laconic oblivious transfer. Laconic oblivious transfer was introduced by Cho et al. at Crypto last year, and it has already found several interesting applications. A direct corollary, which we obtain by plugging in the known constructions of laconic OT, is a construction of adaptive garbled RAM based on standard number-theoretic assumptions, such as computational Diffie-Hellman or factoring, or even lattice-based assumptions such as LWE. As a consequence, we also obtain the first constant-round MPC for RAM programs in the malicious setting that supports persistence. In the rest of the talk, I will first give you a brief introduction to the construction of adaptive garbled circuits, which appeared at Eurocrypt this year; this is the starting point of our work. I'll then tell you about the challenges in extending this work to the RAM setting. Finally, I will tell you how to overcome these challenges and the main tools and techniques that we use in overcoming them. So let me start with a brief overview of the construction of adaptive garbled circuits. The starting point of this work is to view a Boolean circuit in an alternative form.
To give you an example, let me consider a very simple Boolean circuit that has just three gates, G1, G2, and G3, and takes three inputs, x1, x2, and x3. This Boolean circuit is viewed as a sequence of three step circuits, along with a database, or memory. The database is initially loaded with the inputs x1, x2, and x3. The first step circuit reads x1 and x2 from the database, computes the output of the gate G1, and writes it back to the database. Similarly, the second step circuit reads x2 and x3, computes the output of gate G2, and writes it back. And the final step circuit reads the outputs of G1 and G2, computes G3, and writes it back. One can easily see that any Boolean circuit can be viewed in this form. So to garble this Boolean circuit, it is sufficient to garble the database and garble the step circuits. Let me first explain how the step circuits are garbled in the previous work. These step circuits are garbled using a selectively secure garbling scheme, say Yao's garbling scheme. Now the main question is: how do these garbled step circuits access the database? In other words, we need a mapping from the contents of the database to the labels of these garbled circuits. The main tool that was used in the previous work was laconic oblivious transfer. So let me first explain what is meant by laconic oblivious transfer, and something stronger, which is called updatable laconic oblivious transfer, and then I'll tell you how this is useful in enabling the step circuits to access the database. Updatable laconic oblivious transfer, introduced as I mentioned before by Cho et al. in 2017, consists of three main algorithms. The first is a hash algorithm, which takes a large database D and hashes it down to a small digest h. One can think of this hash as some sort of Merkle hash.
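As a toy illustration of this alternative view (a sketch of my own, not the garbling scheme itself), here is the three-gate circuit written as a sequence of step circuits over a shared memory. The concrete gate functions and memory layout are hypothetical choices made just for this example:

```python
# Toy illustration (not a garbling scheme): a Boolean circuit viewed as a
# sequence of step circuits that read from and write to a shared database.
# The gate functions (AND, XOR) and slot layout are hypothetical choices.

def AND(a, b): return a & b
def XOR(a, b): return a ^ b

# Each step circuit is (read_loc_1, read_loc_2, gate, write_loc).
step_circuits = [
    (0, 1, AND, 3),  # G1 reads x1, x2; writes its output to slot 3
    (1, 2, XOR, 4),  # G2 reads x2, x3; writes its output to slot 4
    (3, 4, AND, 5),  # G3 reads G1's and G2's outputs; writes the result to slot 5
]

def evaluate(x1, x2, x3):
    db = [x1, x2, x3, None, None, None]  # database initially holds the input
    for (i, j, gate, k) in step_circuits:
        db[k] = gate(db[i], db[j])       # read two cells, write one cell
    return db[5]
```

Garbling the circuit then reduces to garbling this small, uniform per-step computation plus the database, which is what makes the view useful.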
The second algorithm is called the read algorithm. It takes this hash value h, an index i of the database, and two messages m0 and m1, and outputs a read ciphertext cr. It comes along with a read-evaluation algorithm that has access to the database: it takes the read ciphertext cr and outputs the message corresponding to the bit in location i of the database. Namely, if the bit in location i is zero, then one can obtain m0; otherwise one can obtain m1. So this is similar to OT, and that's why it's called a laconic OT. The laconic aspect refers to the efficiency requirements: namely, the running times of the read algorithm and the read-evaluation algorithm grow only polylogarithmically in the size of the database and are otherwise independent of it, and the size of the read ciphertext is also polylogarithmic in the size of the database. So this is the second algorithm, read. The final algorithm is a write algorithm, which takes a hash value h, an index i, and a bit b that has to be written to this index i, and outputs an updated hash value. This updated hash value is the hash of a new database D prime, which is the same as D except that location i is overwritten with the bit b; there is just one change between D and D prime, in location i. These are the three algorithms, and a sequence of works, starting with the original work of Cho et al., gives us constructions of updatable laconic oblivious transfer from assumptions such as CDH, factoring, and LWE. Let me now tell you how this primitive is actually useful in enabling the garbled step circuits to access the database. Initially, the database is loaded with the input, and the first step circuit takes the hash of this initial database. It has to read x1 and x2, right? So it reads those two values using the read function of the laconic OT.
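To make the "some sort of Merkle hash" intuition concrete, here is a toy Python sketch of a Merkle hash of a bit database, where a write produces the updated digest by re-hashing only one root-to-leaf path. This is my own illustrative sketch, not laconic OT itself: it has no read ciphertexts and no hiding, and all names and parameters are hypothetical.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class MerkleDB:
    """Toy Merkle hash of a bit database. Illustrates only the 'updatable'
    aspect: overwriting one bit re-hashes a single root-to-leaf path,
    so the digest update costs O(log n). No security properties."""

    def __init__(self, bits):
        n = len(bits)
        assert n and n & (n - 1) == 0, "power-of-two size, for simplicity"
        self.n = n
        # tree[1] is the root; the leaves occupy tree[n .. 2n-1]
        self.tree = [b""] * (2 * n)
        for i, b in enumerate(bits):
            self.tree[n + i] = H(bytes([b]))
        for i in range(n - 1, 0, -1):
            self.tree[i] = H(self.tree[2 * i] + self.tree[2 * i + 1])

    def digest(self) -> bytes:
        return self.tree[1]

    def write(self, index: int, bit: int) -> bytes:
        """Overwrite one bit and return the updated digest; only the
        log(n) hashes on the leaf's path to the root are recomputed."""
        i = self.n + index
        self.tree[i] = H(bytes([bit]))
        i //= 2
        while i >= 1:
            self.tree[i] = H(self.tree[2 * i] + self.tree[2 * i + 1])
            i //= 2
        return self.digest()
```

For example, writing bit 0 into location 2 of the database [0, 1, 1, 0] yields exactly the digest of the fresh database [0, 1, 0, 0], mirroring what the write algorithm of updatable laconic OT guarantees about h prime.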
It computes the output of the gate G1 and writes it back, and as I mentioned before, the write function outputs an updated hash value of this new database, which is h prime. This h prime is passed on to the next step circuit, and the process continues. In the previous work, which appeared at Eurocrypt this year, it was shown that this construction satisfies adaptive security, but for the circuit setting. However, we are interested in the RAM setting, and there are several challenges that arise in the RAM setting. The first challenge is how to protect the contents of the database. In the case of circuits, the locations of the database that are accessed by each step circuit are fixed a priori, and this makes the task of protecting the contents of the database straightforward; in fact, a simple one-time pad was used to protect the contents. Whereas in the RAM setting, the locations that are accessed by each step circuit are chosen dynamically, and this makes protecting the database highly non-trivial. So this is the first challenge, and the second challenge is how to protect the access pattern. In the RAM setting, the access pattern reveals a lot of non-trivial information, and we need to protect it. In the parlance of garbled RAM, what we can obtain from the adaptive garbled circuits paper is a construction of adaptive garbled RAM that satisfies a security notion called unprotected memory access: namely, the contents of the database and the access pattern are revealed in the clear, and the only things that are not revealed are the internals of the step circuits and the input. In the selective setting, there are known transformations, from the work of Gentry et al., that transform a garbled RAM satisfying unprotected memory access into one with full security, and this is done using an ORAM scheme and a symmetric encryption scheme.
Unfortunately, this transformation does not work in the adaptive setting, and in the adaptive setting we need more sophisticated tools to solve these two challenges. In the rest of the talk, I'll focus on the first challenge, how to protect the contents of the database. The second challenge is solved using a specialized ORAM scheme; we cannot use just any ORAM scheme, and I would encourage you to look into the paper for the details. So in the rest of the talk, let me focus only on protecting the contents of the database. Before I tell you our solution, let me explain why the prior approaches are not sufficient for our purpose. The prior approaches use some sort of location-based encryption scheme. For example, the content of the database in location one is masked using a pseudorandom function evaluated at one, and this PRF key is hardwired in each of the step circuits. The PRF key is used for unmasking the read value as well as for generating the mask for the value that is going to be written. However, this is not sufficient for our purposes, and to see why, we need to go a little bit into the details of how adaptive security was proved in the previous work. In the real world, we have all the step circuits garbled as per the specification of the construction, and in the ideal world, we have dummy step circuits that just write some junk value to the database. We need to go from the real world to the ideal world, and this is done via a hybrid argument. The sequence of hybrids used in the previous work was as follows: in the first hybrid, we change the last step circuit to a dummy step circuit; next, we change the second-to-last to a dummy step circuit; and so on, until we get to the ideal-world configuration.
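The location-based masking just described can be sketched in a few lines. This is my own toy instantiation, with HMAC-SHA256 truncated to one bit standing in for the PRF; the names are hypothetical:

```python
import hashlib, hmac

def prf(key: bytes, loc: int) -> int:
    """Toy one-bit PRF: HMAC-SHA256 of the location, truncated to a bit."""
    return hmac.new(key, loc.to_bytes(8, "big"), hashlib.sha256).digest()[0] & 1

def mask(key: bytes, loc: int, bit: int) -> int:
    """'Encrypt' the database content at location loc."""
    return bit ^ prf(key, loc)

def unmask(key: bytes, loc: int, ct: int) -> int:
    """'Decrypt' the database content at location loc."""
    return ct ^ prf(key, loc)
```

The trouble is that every step circuit hardwires the same `key`, which is exactly what blocks the hybrid argument described next.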
This sequence of hybrids, wherein we change the last step circuit to a dummy step circuit first, is actually crucial for obtaining adaptive security with the required efficiency parameters. However, it is in direct conflict with the location-based encryption scheme. Consider the first hybrid change: we need to change the last step circuit to a dummy step circuit, meaning that we need to change the value that is being written by it to some junk value. Intuitively, we need to use the security of the location-based encryption scheme. However, in order to use that security, we need to get rid of the PRF key, which is hardwired in all the previous step circuits, and this does not seem possible unless we rely on some complex circularity assumptions. Even puncturing does not seem to help, as the complex dependencies that arise from the chaining of these garbled circuits affect the efficiency. So there is a need to develop new tools to solve this problem. The main tool that we develop in this work is a timed encryption scheme: in particular, instead of a location-based encryption scheme, we consider a time-based encryption scheme. Let me first explain what is meant by a timed encryption scheme, and then explain how it is useful in overcoming the challenge with location-based encryption. A timed encryption scheme is just like any other symmetric encryption scheme, except that the encryption algorithm takes an additional timestamp, so every ciphertext is generated with respect to a timestamp. There is an additional key-constraining algorithm that takes a timestamp and a key and outputs a time-constrained key. The decryption algorithm takes a time-constrained key and a ciphertext and outputs the message, as long as the timestamp used for constraining the key is greater than or equal to the timestamp with respect to which the ciphertext was generated.
That is, you can use a time-constrained key to decrypt a ciphertext that was generated with respect to some timestamp in the past. The security property states that, given the time-constrained key, a ciphertext generated with respect to some future timestamp still has semantic security: namely, you cannot distinguish a ciphertext of the original message from a ciphertext of some junk value. In this work, we give a construction of timed encryption by a simple modification of the GGM PRF. Let me now describe how timed encryption is used in the construction. The initial contents of the database are encrypted with respect to timestamp zero; that is, the ciphertexts are generated with respect to timestamp zero. And whenever the i-th step circuit has to write to some location, it generates a ciphertext with respect to timestamp i. Furthermore, the i-th step circuit has the key that has been time-constrained with respect to i hardwired. To see that this is actually correct, let's consider the third step circuit, which has the constrained key k3 hardwired. By the correctness of the timed encryption scheme, we know that this constrained key can be used to decrypt all ciphertexts generated with respect to timestamps zero, one, or two: namely, it can decrypt the ciphertexts that are the initial contents of the database, as well as those written by the step circuits that come just before this one. So correctness is preserved, and let me explain how this helps us in making the hybrid changes. Consider the first hybrid as before: we want to change the last step circuit to a dummy step circuit, that is, we need to change the value that is being written by the step circuit to some junk value.
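To give a feel for how a GGM-style tree can yield timed encryption, here is a toy Python sketch of my own: the key for timestamp t is a GGM leaf, and a key constrained to timestamp t is the set of subtree keys covering the range [0, t], so exactly the past leaf keys can be re-derived from it. This is only an illustrative sketch under hypothetical names and parameters, not the construction from the paper, and the one-bit "encryption" is a toy.

```python
import hashlib, hmac

DEPTH = 8  # timestamps 0 .. 2**DEPTH - 1 (toy parameter)

def _child(key: bytes, bit: int) -> bytes:
    # One GGM step: a length-doubling PRG realized here with HMAC-SHA256.
    return hmac.new(key, bytes([bit]), hashlib.sha256).digest()

def leaf_key(master: bytes, t: int) -> bytes:
    """Per-timestamp key: the GGM leaf reached by the bits of t."""
    k = master
    for i in reversed(range(DEPTH)):
        k = _child(k, (t >> i) & 1)
    return k

def constrain(master: bytes, t: int):
    """Key constrained to timestamp t: GGM subtree keys covering [0, t]."""
    cover, k = {}, master
    for i in reversed(range(DEPTH)):
        bit = (t >> i) & 1
        if bit == 1:
            # everything in this left sibling's subtree has a smaller timestamp
            cover[(DEPTH - i, (t >> i) ^ 1)] = _child(k, 0)
        k = _child(k, bit)
    cover[(DEPTH, t)] = k  # the leaf for t itself
    return cover

def leaf_from_constrained(cover, s: int):
    """Re-derive the key for timestamp s, possible only when s <= t."""
    for (depth, prefix), k in cover.items():
        if s >> (DEPTH - depth) == prefix:
            for i in reversed(range(DEPTH - depth)):
                k = _child(k, (s >> i) & 1)
            return k
    return None  # s lies in the future: semantic security should hold

def encrypt(master: bytes, s: int, bit: int) -> int:
    return bit ^ (leaf_key(master, s)[0] & 1)  # one-bit one-time pad (toy)

def decrypt(cover, s: int, ct: int):
    k = leaf_from_constrained(cover, s)
    return None if k is None else ct ^ (k[0] & 1)
```

A key constrained to timestamp 5 recovers the leaf keys for timestamps 0 through 5 but yields nothing for timestamp 6 onward, which is the correctness/security split the construction needs.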
For this purpose, we can directly use the security of timed encryption, because it guarantees that, even given the time-constrained keys k1 to k8, a ciphertext generated with respect to timestamp nine has semantic security, and this is what is generated by the last step circuit. So this is how we overcome the challenge with location-based encryption. As I mentioned before, to protect the access pattern, we need to use a specialized ORAM scheme, and I won't have time to go into the details, so I would encourage you to look into our paper. To conclude, we gave a construction of adaptive garbled RAM from standard assumptions such as CDH, factoring, or learning with errors. As a consequence, we also obtained the first constant-round malicious MPC for RAM programs in the persistent setting from standard assumptions. The major open problem is whether we can remove public-key assumptions from the construction of adaptive garbled RAM: in the selective setting, we know constructions from one-way functions, but in the adaptive setting, we seem to require public-key assumptions. And that's it. Thank you for your attention.