Okay, thank you, Manos. Hello, everybody. I'm going to talk today about TWORAM, a protocol for oblivious RAM in two rounds. This is joint work with Sanjam and Payman. So I'm going to start with the motivation. The main motivation for ORAM is private cloud storage, and the main problem we have there is that we would like to store our files in the cloud without revealing any information, right? So okay, that's easy: the solution we can have is to encrypt our files with a CPA-secure scheme. That protects the contents of the files, but unfortunately that's not enough. We want to achieve a more ambitious goal: if someone observes which files are being accessed, this can reveal information about the contents of the files. Okay, so we would like to hide the access pattern, which files we are accessing. And this is what oblivious RAM gives you: oblivious RAM is basically a way to access a collection of files such that the access pattern is not revealed. Okay. So let me give an informal definition of oblivious RAM. We have an array A of N entries, and what we want is to access one entry of this array, say A[x]. So I'm going to use this ORAM protocol to access entry x; the ORAM protocol is going to access a bunch of entries and then it's going to output A[x]. Okay, so this is the real world. In the ideal world we again want to access x, but we don't give this x to the simulator; the simulator only gets to see the size of the array, right? Then it gets to perform some accesses on this array and it outputs something. Now, if the adversary observes just the real accesses and the accesses made by the simulator, he shouldn't be able to distinguish which world he is in. Okay, so this is an informal definition of ORAM. Now something about existing work. A naive way to solve the problem is to just access everything.
So this means that to access one block, you need to access a linear number of blocks, so this is not good. Then in 1990, the work of Rafail Ostrovsky showed that we can do this with polylogarithmic overhead: to access one block obliviously, you just need to access a polylogarithmic number of blocks, which was pretty cool. That created a long sequence of works based on this hierarchical framework that Rafail introduced, and there have been a lot of papers since then. There was another framework, introduced in 2011 at Asiacrypt by Shi, Chan, Stefanov, and Li, which is a tree-based framework, okay? And this is fundamentally different from the hierarchical one, and again there have been a lot of papers since then. Tree-based approaches to oblivious RAM seem to be leading to more practical implementations. In particular, with a tree-based ORAM there's no need for de-amortization of worst-case costs: everything is polylogarithmic in the worst case. You can observe up to a 100x speedup over hierarchical approaches, and it has been shown that you can also implement it in hardware; a secure oblivious RAM processor called Phantom was being built, okay? So in essence, it seems that the tree-based framework is more practical than the hierarchical one. Okay, so this work is about non-interactive tree-based ORAM. The problem is that most oblivious RAMs require interaction, typically a polylogarithmic number of rounds. In particular, when you do an access, you are required to download, decrypt, do some computation, re-encrypt, and upload, a number of times, okay? So naturally this creates interaction. Ours is not the first non-interactive ORAM: in particular, it has been shown how to turn hierarchical ORAMs into non-interactive ones, by Williams and Sion in 2012 and by Lu and Ostrovsky in 2013.
Since those follow the hierarchical framework, the costs are linear in the worst case, but you can de-amortize them and make them polylogarithmic in the worst case. And you can also get non-interactive ORAM by just doing garbled RAM directly, but this has a big cost, okay? So, motivated by the fact that tree-based ORAMs seem to be more practical than hierarchical ORAMs, we want to build a non-interactive tree-based ORAM. Our worst-case costs are polylogarithmic, so there's no need for de-amortization, and we use the Path ORAM design, okay? Subsequent to our work, there was also another non-interactive ORAM based on the tree-based framework, Bucket ORAM, which is somewhat similar to ours. So let me describe our approach now. I will start with some Path ORAM basics. Say we have eight blocks, okay? And we want to access them obliviously. The way we're going to do it is we're going to distribute them at random in a tree of eight leaves, okay? We just throw these blocks into this tree; they can go into any node, okay? And now we're going to keep a position map which tells us: look, if you want to access block one, you should start from leaf three and go up toward the root, and somewhere on this path you're going to find the block. If you want to access block four, you should start from leaf eight, go up, and you're going to find that block, okay? That's what the position map tells you. So if you want to read, for example, block four, what's going to happen is you're going to read this path: you download this path locally, you decrypt it, and you read the information. Now the problem is, when you want to write it back, you need to make sure that you change the position of block four, because you don't want to read the same position the next time you read block four; that would break security.
So what you do is you pick another random number and you assign block four to leaf three, okay? And you rewrite this path. So now block four went up to the root, and it can be accessed by just going up from leaf three. I'm just describing the existing Path ORAM here, okay? So that's how it works; this is a very simplified version of it. Now, there's the notion of recursion. The position map takes linear space, so storing it locally would require linear client storage; you actually have to do ORAM on the position map itself. The way to do that is to view this position map as an array of four entries instead of eight, okay? And now you're going to put that in a different ORAM. This in turn requires a position map of four entries itself. So you store the actual data on another tree, and now you have another, smaller position map, okay? So now let's see how it works. If you want to read block seven, you go to the position map you have and you say, okay, seven should fall into the fourth bucket, so I'm going to be accessing the fourth leaf of the first tree, okay? So you go there: you start from the root, going down, you access the fourth leaf, and at the second level you find the entry that corresponds to seven, okay? So you pick that entry, and this entry has two values: the left value is for seven and the right value is for eight, okay? That means that you need to access leaf six in the other tree. So that's how you continue the execution: you go down this other tree and you're guaranteed to find the actual value for block seven. So you have a position map for the first tree, and the first tree stores the position map of the second tree, okay? So now, the interaction comes from the following thing, right?
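To make the basic, non-recursive access concrete, here is a minimal sketch in Python. Everything here is illustrative: real Path ORAM also encrypts buckets, bounds bucket sizes, and evicts blocks down the path on write-back, all of which this toy version skips by pushing the accessed block back into the root.

```python
import random

# Toy, non-recursive Path ORAM sketch (encryption and eviction omitted).
class ToyPathORAM:
    def __init__(self, num_leaves):
        self.num_leaves = num_leaves   # tree with num_leaves leaves
        self.tree = {}                 # node id -> list of (block_id, value)
        self.pos = {}                  # block_id -> leaf it is assigned to

    def path(self, leaf):
        """Node ids from the root down to a leaf (heap numbering, root = 1)."""
        node, nodes = leaf + self.num_leaves, []   # leaves are ids N .. 2N-1
        while node >= 1:
            nodes.append(node)
            node //= 2
        return list(reversed(nodes))               # root first

    def write(self, block_id, value):
        self.pos[block_id] = random.randrange(self.num_leaves)
        self.tree.setdefault(1, []).append((block_id, value))

    def access(self, block_id):
        leaf, found = self.pos[block_id], None
        # Read the whole root-to-leaf path, removing the target block.
        for node in self.path(leaf):
            bucket = self.tree.get(node, [])
            for entry in bucket:
                if entry[0] == block_id:
                    found = entry
            self.tree[node] = [e for e in bucket if e[0] != block_id]
        # Remap the block to a fresh random leaf and push it back at the root.
        self.pos[block_id] = random.randrange(self.num_leaves)
        self.tree.setdefault(1, []).append(found)
        return found[1]
```

The key security step is the remapping at the end of `access`: the next read of the same block will go down an independently random path.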
So you need to access the first tree, decrypt the information, figure out what is the next leaf in the other tree, right? And then access the other tree, okay? That's what creates the interaction. So how can we avoid it? Okay, so let's abstract the problem out a little bit and consider a more abstract problem that captures what I talked about before. The abstract problem is this: imagine you have a sequence of binary trees, okay? And every node of these binary trees performs some computation, right? It stores some data and it has some logic; it takes some input and produces some output, okay? Now, you have an encrypted input that you give to the first root, okay? And you also give it a path number, saying, okay, you should go down the second path, okay? So you execute the computation here, which provides the input for the next node on the path, and as you go down this path, you're guaranteed that the computation along the path will output the path to be followed in the next tree, okay? Then the computation along that path will give you the path to follow in the tree after that, and so on, and eventually, by following the last path, the output of the whole computation will be spit out by the last leaf. Okay, so this is basically what's happening with tree ORAM. With Path ORAM, the data and logic at each node is basically just checking whether this encrypted input matches the data stored in the node, and once it matches, you can figure out the path to follow in the next tree. Okay, so we want to solve this problem without interaction, right? Right now it requires interaction, because we have to download the paths, decrypt, run the computation ourselves, and then continue the computation in the next tree.
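The sequence-of-trees abstraction can be simulated in plaintext, i.e., the interactive no-crypto version, in a few lines. Here the node "logic" is just a lookup over (key, payload) pairs, mirroring the position-map recursion; all names are illustrative.

```python
# Plaintext model of the abstract problem: walking the given path in tree i
# reveals the path (and key) to use in tree i+1; the last tree outputs the
# final value. Nodes use heap numbering (root = 1, children 2n and 2n+1).
def walk(trees, first_path, first_key):
    """trees: list of {node_id: [(key, payload), ...]}.
    A path is a list of 0/1 bits (0 = go left, 1 = go right)."""
    path, key = first_path, first_key
    for i, tree in enumerate(trees):
        node, payload = 1, None
        for bit in path:
            for k, v in tree.get(node, []):
                if k == key:
                    payload = v            # this node's computation: a lookup
            node = 2 * node + bit
        for k, v in tree.get(node, []):    # also check the leaf bucket
            if k == key:
                payload = v
        if i + 1 < len(trees):
            path, key = payload            # payload names the next path and key
        else:
            return payload                 # the last leaf spits out the answer
```

The interaction in the real protocol comes precisely from the line `path, key = payload`: there, the payload is encrypted, so the client has to decrypt it before the server can continue into the next tree.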
Okay, and we only want to leak the public inputs, two, one, and six, which are the paths that are being accessed. We don't want to leak information about the encrypted input we give at the beginning; you can think of this encrypted input as the original index we would like to access. Okay, so if we solve this problem, we can do non-interactive tree ORAM, Path ORAM. Okay, so let's try to see how we can solve it. Let's try fully homomorphic encryption; it's very powerful. What we can do is encrypt the input with a fully homomorphic encryption scheme, right? And then we have the public input, which is the path we need to go down. You run the computation here on the homomorphically encrypted input, it gives you an output, you run it again here, but the output of a fully homomorphic encryption scheme is always encrypted, right? We expect this computation to give us the next path to access, and yes, it is going to give it to you, but it's going to be encrypted, so it's useless: the server cannot use it, because fully homomorphic encryption outputs the next path to be accessed in encrypted form. So again, to continue, you would have to send it back to the client to decrypt and say, okay, continue the execution from this path. So fully homomorphic encryption will not work, okay? So let's try garbled circuits: remember, we have this computation at every node. One way to do it is to compute garbled circuits for this computation at every node of the tree. Remember, each node in this tree stores different data, so the garbled circuit is going to be different, right? So we need to compute a different garbled circuit for every node of the tree. And it's very important to compute them bottom-up, right? And I'm going to say why. So you compute the garbled circuits of the node circuit I showed you, first for the leaves of this tree, then for their parent nodes, and so on and so forth, and you continue like this, right?
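The bottom-up order matters because each parent's circuit will need its children's input labels to already be fixed. A sketch of just the ordering, where `garble_node` is a stand-in (an assumption of this sketch) for an actual garbling procedure:

```python
import os

# Garble a full binary tree of circuits leaves-first (heap numbering:
# root = 1, children of node n are 2n and 2n+1).
def fresh_labels(n_wires=4):
    """One (label-for-0, label-for-1) pair per input wire."""
    return [(os.urandom(8), os.urandom(8)) for _ in range(n_wires)]

def garble_tree_bottom_up(num_leaves, garble_node):
    labels, circuits = {}, {}
    for node in range(2 * num_leaves - 1, 0, -1):     # leaves first, root last
        labels[node] = fresh_labels()
        left, right = 2 * node, 2 * node + 1
        # The children's labels already exist (or are None at the leaves),
        # so the parent can embed them when it is garbled.
        circuits[node] = garble_node(node, labels[node],
                                     labels.get(left), labels.get(right))
    return circuits, labels
```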
So okay, say we have computed the garbled circuits and we have sent them, and I want to do the execution. What's going to happen is we're going to send the garbled input for the first root, okay? That's going to execute, right? But then it's going to output something, say an encryption of the result of the node's logic, but it's not going to output the garbled inputs for the next circuit, and we need garbled inputs for the next circuit, right? One way to do it is to send the result back to the client, compute new garbled inputs, and execute the second circuit, okay? But that's interaction again, so that's not going to work. So we need to change the circuit that we garble, just a little bit. Here's how: this was our original circuit that we used to garble, with an input and an output. Now what we're going to do is hard-code into this circuit the garbled labels of the left child's circuit and the garbled labels of the right child's circuit. These are not garbled inputs, these are garbled labels. Okay, and now we're also going to give as input to the circuit the path that is being followed. Okay, so given the path, the circuit can figure out whether I'm going left or right, so it can pick one of the two sets of garbled labels, either left or right. Then it can combine them with the output of the logic and instantiate these garbled labels, turning them from garbled labels into garbled inputs. Right, and then it's going to output the actual next garbled input. So there's no interaction now: we just hard-code the garbled labels of the next circuits into our current node, and then the next circuit is ready to execute. Okay, and we also need to be careful because when we get to a leaf, we don't have a left and a right child, so there we have to output the garbled labels of the next tree's root, right, to continue the computation on the next encrypted tree. Okay, so let's see now how the execution will look.
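The label-selection step can be illustrated with a toy. This is not real garbling: in the actual construction, this selection happens inside the garbled circuit under the garbling scheme's encryption. The point is just the distinction between garbled *labels* (both options per wire) and a garbled *input* (one chosen label per wire).

```python
import os

# Toy version of the trick: a node hard-codes the label PAIRS of both
# children's circuits; at evaluation it uses the path bit to choose a child
# and instantiates that child's labels with the bits it computed, producing
# ready-to-use garbled inputs with no round trip to the client.
def fresh_labels(n_wires):
    """One (label-for-0, label-for-1) pair per input wire."""
    return [(os.urandom(8), os.urandom(8)) for _ in range(n_wires)]

def instantiate(label_pairs, bits):
    """Garbled labels -> garbled input: pick one label per wire by its bit."""
    return [pair[b] for pair, b in zip(label_pairs, bits)]

def node_output(left_pairs, right_pairs, path_bit, next_input_bits):
    """What the (in reality, garbled) node circuit computes internally."""
    child = right_pairs if path_bit == 1 else left_pairs
    return instantiate(child, next_input_bits)
```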
So we have the client, who has the first path and the index that he wants to access, locally, right? He computes a garbled input of these two values for the root of the first tree. The first circuit executes and outputs garbled inputs for its right child. Those garbled inputs are used to execute that circuit, which outputs the garbled inputs for the next root, right? And then it continues: it outputs the garbled inputs of the left child, then of the left child again, and eventually it outputs an encryption of the actual value we're looking for, okay? So this is how the basic scheme works, but now there's a problem, right? Remember, here I only have two trees; for the first tree, we had to hard-code the garbled labels of the next root into every node of this tree, okay? And of course, whenever we use a garbled circuit, it cannot be used again, so we have to refresh everything, right? So if we refresh the garbled circuit of this root, since its garbled labels are hard-coded in every garbled circuit of the first tree, we have to refresh every garbled circuit of that tree, which means that the access complexity is linear, because we have to refresh all these garbled circuits that hard-code the garbled labels of the root, right? So that's not good. And the way to solve it is basically, instead of hard-coding the garbled labels of the next root, you just pass them as input, right? So the idea is: initially, I used to compute a garbled input for the first root from the first path and the index. Now I'm going to compute a garbled input for the first root from the first path, the index, and the garbled labels of the second root. So from all three, I'm going to compute a garbled input for the first root, and then I'm going to keep passing down these garbled labels, right?
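To see why hard-coding forces a linear refresh while passing the labels as input does not, here is a back-of-the-envelope count (illustrative, not the paper's exact accounting):

```python
# Circuits that must be re-garbled after a single access, for one tree of
# num_leaves leaves.
def circuits_to_refresh(num_leaves, hardcoded_next_root):
    num_nodes = 2 * num_leaves - 1        # full binary tree
    path_len = num_leaves.bit_length()    # root-to-leaf path: log N + 1 nodes
    # Hard-coding the next root's labels in every node means a fresh root
    # label set invalidates the whole tree; passing the labels as input
    # means only the circuits on the accessed path were consumed.
    return num_nodes if hardcoded_next_root else path_len

# e.g. for a tree of 2**20 leaves:
# circuits_to_refresh(2**20, True)  -> 2097151  (linear)
# circuits_to_refresh(2**20, False) -> 21       (polylogarithmic per tree)
```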
I'm going to do the computation, but now, whenever I have to instantiate the garbled labels of the next root, I don't need to have them hard-coded, because I just get them as input, okay? And then I do the same here; note that the client will also need to give as input the garbled labels of the third root, which doesn't exist in this picture, and eventually you get the actual encrypted value as output. Okay, so with this you can refresh everything in polylogarithmic time; you don't have to do this linear-time computation. So the cost of our approach is exactly the cost of Path ORAM times K, where K is the security parameter, and this comes from the fact that we have to keep a garbled circuit per node, okay? For example, where Path ORAM downloads a block of B bits, we need to download this block hard-coded in a garbled circuit, so this costs K times B bits, okay? And one open problem is to remove this multiplicative K, to have a non-interactive ORAM without the multiplicative K; I don't know if this is even possible. Okay, so this is the basic scheme. What I presented so far is an abstraction of how Path ORAM works; I solved this problem of non-interactive computation on a sequence of trees, and then I showed that this applies to ORAM, okay? So let's talk about an application of this non-interactive ORAM to searchable encryption. Let me remind you a little bit what searchable encryption is. In searchable encryption, we store a set of N keyword-document pairs in encrypted form, and given an encryption of a keyword w, we want to return all document IDs matching w. All existing approaches leak a deterministic function of w, which we call the search pattern, okay? We show how to hide this pattern without using full-fledged ORAM, which would require d times log n rounds per search. We use four rounds instead, and our scheme is potentially practical, okay?
So again, I'm going to abstract things a little bit, okay? We have the notion of adaptive computation: you have a memory, you give an input x, and you execute instructions one at a time, okay? This is adaptive computation; you cannot execute things in parallel. So this is one class of computations, and if you just use a normal ORAM for it, you need T(n) times log n rounds, right? Where T(n) is the computation time. Now, if you use TWORAM for it, you can do it in T(n) rounds, because you need two rounds per access, okay? This just follows from the previous part, right? Now, if you have a non-adaptive computation, here's what's happening: you have an input x, you get a result y out, and based on this result you can compute some function f(y), and then you can figure out in parallel everything that you need to access, okay? So let's see how we can make this computation oblivious. If you use an interactive ORAM, you need log n rounds for the first, small array, and log n rounds for the big array, assuming you have an ORAM that can send requests in parallel, right? Okay, so now, if you use TWORAM, you can do the first array in two rounds, but the big array still takes log n rounds, okay? And we want to have four rounds, right? So, okay, one could ask me: it's easy to get four rounds, why don't you use TWORAM for the big array? Well, you cannot use TWORAM for parallel accesses, because you need to refresh the garbled circuits first and then execute the next access, right? So TWORAM cannot be used to execute accesses in parallel. So basically, the idea is to store the small array in TWORAM, and to store the big array in Path ORAM, but without the position map, right? The position map is instead computed using a pseudorandom function. And then you can do basically everything in parallel, okay?
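The big-array trick can be sketched as follows. This is an illustration of the idea rather than the paper's exact construction (in particular, the real scheme must re-randomize positions after each access, which this sketch ignores): the leaf for the i-th pair of keyword w is derived with a PRF, so once the count is known, all paths can be requested in a single parallel round. Here HMAC-SHA256 stands in for the PRF.

```python
import hashlib
import hmac

# Pseudorandom leaf for the i-th (w, id) pair: no stored position map needed.
def leaf_for(prf_key, keyword, i, num_leaves):
    mac = hmac.new(prf_key, f"{keyword}|{i}".encode(), hashlib.sha256)
    return int.from_bytes(mac.digest()[:8], "big") % num_leaves

# All paths for a keyword, computable in one shot (hence readable in parallel).
def paths_for_search(prf_key, keyword, count, num_leaves):
    return [leaf_for(prf_key, keyword, i, num_leaves) for i in range(count)]
```

The small array (keyword to count) is what still needs a real ORAM, but it only has as many entries as there are unique keywords, which keeps the scheme practical.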
And searchable encryption is a non-adaptive computation, because the small array stores each keyword w and the number of documents matching it, and the big array stores all the (w, id) pairs. Okay, so to conclude: I presented TWORAM, the first non-interactive tree-based ORAM. Our cost is K times Path ORAM, and I presented its application to searchable encryption: a four-round searchable encryption scheme, which is potentially practical, since we only use TWORAM for an array of size equal to the number of unique keywords, and for the rest we use just Path ORAM without the position map. Okay, thank you very much.