The next talk is Memory Delegation, by Kai-Min Chung, Yael Tauman Kalai, Feng-Hao Liu, and Ran Raz, and Kai-Min will be giving the talk.

So, is this working now? Not yet. This laptop seems to have gone to sleep. Okay, it's recovered; it's working now.

So, I'll be talking about our new model of delegation, called memory delegation. This is joint work with Feng-Hao Liu, Yael Kalai, and Ran Raz. This is the final talk of the session, and I know people are tired, so I promise I will not give any technical details in this talk. What I want to do is define this new model of memory delegation, and I hope that by the end of the talk you'll be convinced that this is an important model and that our solution is useful.

Let me start with the standard model of delegation of computation. In this model, we have a delegator who wants to delegate some work to a worker. Say the delegator wants to delegate the computation of a function f on a certain input x. The worker does the job and computes the answer y, but the delegator also wants to know that the answer is correct. So the worker sends a proof π, and the delegator verifies it and accepts or rejects the answer accordingly.

In this scenario, these are the important properties we care about. The most important is computational efficiency: in particular, verification must be faster than the computation itself, otherwise there is no point in delegating, and we also want the worker's overhead for computing the proof to be small. Another important measure is round complexity: can we minimize the number of rounds of interaction? Can the proof be non-interactive? The third is generality: what kind of functions can we delegate?
Can we delegate a general uniform function with a small description? And finally, what assumptions do we need?

So, the following is the best we can hope for from computation delegation. Again, the delegator delegates the computation of f(x) to the worker, and we want the delegated function f to be as general as possible. The worker sends back the answer and a proof, and we want the proof to be non-interactive, or as close to non-interactive as possible. We allow the worker to run in a reasonable amount of time: polynomial in the time complexity t of the delegated function f. Most importantly, we only allow the delegator to run in time polynomial in n, the length of the input x, independent of the time complexity of f. As usual, we require completeness and soundness: if both parties follow the protocol honestly, the delegator accepts the correct answer, and if the worker cheats, the delegator rejects a wrong answer.

From previous work, we already know pretty good solutions. We know how to do non-interactive proofs, but only for low-depth functions, like the class NC of functions computable by polylogarithmic-depth circuits. And we can handle general functions if we are willing to use four-message interaction; this is done by universal arguments. In the last year, we learned how to achieve the best of both worlds, non-interactive solutions for general functions; this was done by Gennaro et al. and a few follow-up works. However, this requires an expensive offline setup phase, and, perhaps more importantly, as was also mentioned in the previous talk, the soundness property achieved there is not as strong as we would like.
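[Editor's note: the talk gives no code; the following is an illustrative sketch, not part of the talk or the paper. To make the first efficiency requirement concrete — verification must be cheaper than recomputation — here is the classic Freivalds check for matrix multiplication: the worker claims C = A·B, and the delegator verifies in O(n²) time per round rather than redoing the O(n³) product. This is only a toy for one specific function, not a general delegation scheme.]

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically check the claim C == A @ B.

    Each round costs O(n^2) (three matrix-vector products), versus
    O(n^3) to recompute the product from scratch. A wrong C is caught
    in any given round with probability >= 1/2, so after `rounds`
    rounds the error probability is at most 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        # random 0/1 challenge vector, fresh each round
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught a cheating worker
    return True
```

Here the "proof" is empty and the check is only statistical, but it already shows the asymmetry the talk asks for: the delegator's work is strictly cheaper than the delegated computation.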
In particular, soundness there only holds as long as the cheating worker does not learn the delegator's decision bit. I'll come back to this point later; just keep it in mind.

So, the goal, the holy grail of delegation, is: can we achieve efficient non-interactive computation delegation for general functions under reasonable assumptions? Here, the best efficiency we can hope for is that the delegator runs in linear time in the input, since the delegator at least needs to send the input to the worker. We still don't know how to answer this question, but in this talk I want to convince you that we actually want more: we want the delegator to run in sublinear time in the input length.

Why, and how, can this be possible? Consider the cloud scenario, where our input data x is large and is already in the cloud, say all our emails in a Gmail account. In this scenario, the delegator may want to compute some function of her emails, say, how many emails did Bob send me last month, and the worker sends back the answer, together with a proof so she can verify it. Previously, the delegator needed to hold the input x in order to verify the proof, but now we don't want that. Instead, we want the delegator to keep only a short certificate of the input x, with which she can still verify, in time sublinear in the input length. Here I change the input length from n to a capital N, to denote that the input is large.

So, the whole point is that the delegator wants to delegate not only the computation to the worker, but also her data, her memory. Of course, the question is: can we actually do it? Can the delegator delegate the data as well, keep only a short certificate, and still verify the correctness of the computation in sublinear time? The answer is that we can.
We introduce two new models of delegation that capture this scenario, called memory delegation and streaming delegation. Our main result is a way to take previous computation delegation protocols, like the GKR scheme or universal arguments, and turn them into equally efficient schemes for memory delegation and streaming delegation.

So, now let me define our model of memory delegation. In memory delegation, the delegator D has some initial memory x that she wants to delegate to the worker. The delegator computes a short certificate of x and sends the whole memory x to the worker. Later, when she wants to compute some function of her memory, she sends the function f, and the worker sends back the answer y and a proof π. The delegator then uses the certificate to verify the proof π and accepts or rejects accordingly.

The point is that now we can hope for the delegator to be super-efficient: the delegator can run in time polylogarithmic in the length N of the memory and in the time complexity of the delegated function f. As before, we allow the worker a polynomial overhead.

We also allow update operations: since the memory is stored in the cloud, a solution without updates would not be very interesting. In fact, we allow a very general class of update functions: the delegator D can send an arbitrary update function g to the worker W, and the worker should apply g to the memory, updating it to g(x). This g can be very general: say, delete all the emails from Bob, or count something, or do other things with your email. The one issue is that the delegator also needs to update her certificate.
But since the delegator doesn't have the memory, we allow the worker to send some update information to help the delegator update her certificate. The delegator tries to update the certificate using this information; she may fail or succeed, and she accepts or rejects accordingly. Again, the main point is that this needs to be done super-efficiently: the delegator may only run in time polylogarithmic in the memory length N and in the time complexity of the update function g, while the worker runs in polynomial time as before.

So, what are the desired properties of the memory delegation model? Again, let me emphasize efficiency: the delegator must run in polylogarithmic time and the worker in polynomial time. We also want completeness and soundness. Here, soundness becomes trickier, because the delegator and the worker run the compute and update operations multiple times, and each time the delegator accepts or rejects. As mentioned before, there is the issue of the worker learning the delegator's decision bit. The nice thing about our solution is that we actually achieve reusable soundness, where the worker is allowed to learn the delegator's decisions. The soundness guarantee is that when the delegator interacts with a cheating worker, where the worker may even choose the delegator's input and learns the delegator's decision bit each time, the worker still cannot cheat: it cannot convince the delegator to accept an incorrect output.

Let me stress this reusability a bit more. At a high level, the issue is that because the delegator uses a short certificate of x to compute her decision, the decision bit must leak some information about the certificate itself, one bit of leakage per interaction. For our memory delegation scheme, we actually have a public certificate; the certificate does not need to be secret.
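[Editor's note: an illustrative sketch, not the paper's construction, which is built from GKR-style proofs and universal arguments. A Merkle-style tree commitment is perhaps the simplest object with the three features just described: a short public certificate (the root), verification of any memory cell in O(log N) time, and certificate updates that the delegator performs alone from O(log N) of worker-supplied "update information" (the authentication path). Note it only authenticates storage, not general computations over the memory.]

```python
import hashlib

def H(*parts):
    """Domain-separated SHA-256 hash helper."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def build_tree(leaves):
    """Build a Merkle tree over the leaves (count must be a power of two).
    Returns all levels; levels[-1][0] is the short public certificate."""
    levels = [[H(b'leaf', x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(b'node', prev[i], prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, idx):
    """Sibling hashes from leaf idx up to the root: the worker-supplied
    membership proof / update information, O(log N) in size."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def fold(idx, leaf, path):
    """Recompute the root implied by (idx, leaf, path)."""
    h = H(b'leaf', leaf)
    for sib in path:
        h = H(b'node', h, sib) if idx % 2 == 0 else H(b'node', sib, h)
        idx //= 2
    return h

def verify_leaf(root, idx, leaf, path):
    """Check a claimed memory cell against the short certificate."""
    return fold(idx, leaf, path) == root

def updated_root(idx, new_leaf, path):
    """New certificate after writing new_leaf at position idx, computed
    by the delegator alone from the worker-supplied path."""
    return fold(idx, new_leaf, path)
```

As in the talk, nothing here needs to be kept secret: anyone holding the root can verify, which is why decision-bit leakage is harmless in this setting.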
So this leakage does not bother us, and achieving reusability is actually simple. However, for our streaming delegation scheme, which I will briefly mention shortly, the scheme does require the certificate to be secret, and this leakage problem becomes quite challenging: in particular, the cheating worker has the flexibility of choosing what is leaked, by choosing different cheating strategies. To handle this, we take ideas from the continual leakage model and prove two new lemmas. Unfortunately, I don't have time to tell you what these lemmas are; I'll just refer you to our paper.

So, let me go back and state our results. We construct two memory delegation schemes, both efficient in the sense that the delegator runs in polylogarithmic time and the worker in polynomial time. As I said, we base our constructions on previously known computation delegation schemes, so we achieve the same generality: we get non-interactive proofs for low-depth delegated functions, and we can delegate general functions with four-message interaction. Our schemes rely on the same assumptions as the previous schemes.

Next, I turn to our second model, streaming delegation. I won't have much time, so I'll just briefly introduce it through the following example. Consider a stream of stock ticks: the stock goes up by 0.1, goes down, up, down, very fast. We have an investor staring at the stream, trying to decide when to buy the stock. Our investor is a computer scientist, so she wants to do it smartly: she wants to use some complicated trading function, and she doesn't want to compute it by herself, so she delegates the computation of the trading function to some online worker.
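[Editor's note: an illustrative sketch, not the scheme from the talk. It shows the flavor of a streaming certificate with the two properties the streaming model needs: the delegator can update it by herself in O(1) time per appended item, and it must be kept secret. Here the certificate is a polynomial fingerprint of the stream evaluated at a secret random point r; if the worker ever learns r, forging a colliding stream becomes easy, which echoes why decision-bit leakage is dangerous in the streaming setting.]

```python
import random

P = (1 << 61) - 1  # a Mersenne prime modulus

class StreamFingerprint:
    """Incrementally maintained fingerprint of a data stream.

    cert = sum_i x_i * r^(i+1) mod P, for a secret random point r.
    Appending an item costs O(1). Two distinct streams of length L
    collide at a random r with probability <= L/P (Schwartz-Zippel),
    but ONLY while r stays hidden from the worker.
    """

    def __init__(self, seed=None):
        rng = random.Random(seed)
        self.r = rng.randrange(1, P)  # the secret evaluation point
        self.power = self.r           # r^(i+1) for the next item
        self.cert = 0

    def append(self, x):
        """Delegator-side O(1) update for one new stream item."""
        self.cert = (self.cert + x * self.power) % P
        self.power = (self.power * self.r) % P
```

Note the contrast with memory delegation made in the talk: updates here are restricted to appends, but in exchange the delegator needs no help from the worker to maintain the certificate.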
As before, the worker stores the data, and for our investor to verify the correctness of the worker's output, she needs to keep a short certificate of the data stream. This differs from our memory delegation model, because now the data stream arrives constantly, at a high rate, as you can see from the previous animation. So, ideally, the delegator should update the certificate by herself. Luckily, we can do so, because now each update is very simple: it just appends one more item from the data stream. This is very different from memory delegation, where we allow the update to be a general function but, on the other hand, allow the delegator to get help from the worker in updating the certificate.

As a result, we can also construct a streaming delegation scheme, again based on the previous work: we achieve non-interactive proofs for low-depth functions, and we can delegate general functions with four-message interaction. But here, for the construction I mentioned before, we need the stronger assumption of a fully homomorphic encryption scheme.

This ends my talk. To conclude, we construct efficient memory and streaming delegation schemes, with non-interactive proofs for low-depth functions and four-message proofs for delegating general functions. We think the most interesting open question left is to achieve the holy grail for computation, memory, and streaming delegation alike: an efficient delegation scheme that is non-interactive and can delegate general functions. That's it. Thank you.

We have time for one question. Okay, let's thank the speaker again, and all the speakers of this session.