Hello. This is joint work with Helger Lipmaa, who is currently affiliated with Cybernetica and Tallinn University. I'm going to talk mostly about consistent computation. First I'll tell you what the model of consistent computations is, then I'll give you two examples of consistent computations, namely consistent oblivious transfer and consistent conditional disclosure of secrets. After that, I'll talk about some theoretical results relating those security notions. Consistent computation is not as secure as full security in the malicious model, and as usual, the motivation for such relaxed security models is the following. If you start from the semi-honest model, you get a really decent protocol that runs very efficiently. Then you apply generic compilation techniques such as the GMW compiler, and you get a protocol that is secure in the malicious model, but it has high communication complexity and high computational complexity. If you instead use the PCP theorem together with sublinear oblivious transfer, you can reduce the communication complexity, but the resulting protocol has even higher computational complexity, so for many practical protocols it is infeasible. So the idea is to use intermediate security notions that give you better efficiency guarantees. For instance, you can consider input-private protocols, where adversaries are guaranteed not to learn anything about the inputs of honest parties during the protocol itself. Of course, the adversary can still arbitrarily change the output of honest parties, and if you later get to know the output of an honest party, you might afterwards find out something about the inputs as well. Consistent computation is an enhancement of input-private protocols: essentially, we add a fraud detection mechanism which assures that when an honest party accepts an answer, it is the correct answer. How did we arrive there?
To get to this point, we looked at which properties can be achieved easily and which can be sacrificed. It is relatively easy to achieve input privacy; there are two well-known transformations. The first one is by Aiello, Ishai and Reingold, and it works essentially for all crypto-computing protocols based on ElGamal; a paper by Laur and Lipmaa extends this to additively homomorphic cryptosystems. Now, if we want to add consistency, we mainly want a checking mechanism so that an honest party detects when its output has been tampered with. If you put in a universal check, one that warns you whenever your output might have been tampered with, then you are back in the standard security model, and this is inefficient. So what we do instead is consider a setting where you do get a warning that your answer is not correct, but whether you get that warning might depend on your input; namely, the adversary may be able to cause selective protocol failures. But if you accept the answer, it is correct. And finally, what is nice about consistent computations is that you can issue fault complaints to third parties and show that somebody indeed acted maliciously. So how should we formalize this? We do it using the ideal world versus real world paradigm. In the standard two-party security model, things go as usual: both parties send their inputs to a trusted third party, who computes the outputs and first sends one to the corrupted party, who can then decide whether the protocol is aborted or continued; if it continues, the other party gets its answer. Now, we add only one small step to this model, and this is what does the trick: after the inputs have been submitted, a malicious party can send a halting predicate to the trusted third party, and if this predicate holds, the trusted third party does not send the answer to the honest party.
But the trusted third party continues as follows: it still sends the answer back to the malicious party, and that party can still decide whether to abort or continue. Of course, the predicate must be efficiently computable, and there is an interesting subtlety: the protocol failure is not directly observable by the adversary. Namely, the adversary in the ideal world sees the same things whether the protocol ended in failure for the honest party or not. So the adversary can find out whether a failure happened only if, in the subsequent post-processing context, the honest party either complains or it somehow turns out that it did not get the answer. And as you can see, complaint handling comes for free. Why is that? It follows from a trivial observation: if a protocol is consistent and correct, then whenever all parties are semi-honest, you always get an answer. So if you sign all the protocol messages, and an honest party receives its replies, computes, and ends up with the aborting value at the end of the protocol, then it can go to a third party, reveal its input, its randomness, and all received messages, and the third party can check whether the honest party indeed followed the protocol. Of course, this is not good on its own, because usually you would like to hide the input. So essentially, if the protocol messages do not leak information about the input, then this is fine; and if they do leak, we can encrypt the messages. If the protocol messages are signed and encrypted, the honest party can instead produce a zero-knowledge proof which says that it followed the protocol and obtained the failing output. This is a valid complaint showing that somebody acted maliciously during the protocol execution. And why is this better? Because in a client-server protocol, the client usually does very few computations.
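To make the modified ideal functionality concrete, here is a minimal Python sketch of the trusted third party with the extra halting-predicate step. All names here (`run_ideal`, `halt_predicate`, and so on) are illustrative, not from the paper:

```python
def run_ideal(f, x_honest, x_malicious, halt_predicate, abort_after_output):
    """Trusted third party for consistent two-party computation.

    halt_predicate is the efficiently computable predicate the
    malicious party submits after the inputs are in; if it holds,
    the honest party gets the failure symbol instead of its output.
    The malicious party still receives its own output either way,
    so it cannot directly observe whether the failure happened.
    """
    out_honest, out_malicious = f(x_honest, x_malicious)
    if halt_predicate(x_honest, x_malicious):
        out_honest = None  # selective failure for the honest party
    # The malicious party sees its output first and may still abort.
    if abort_after_output(out_malicious):
        return None, out_malicious
    return out_honest, out_malicious

# Example: equality test where the adversary halts exactly when the
# honest input is even, i.e. a selective protocol failure.
res = run_ideal(
    lambda x, y: (x == y, x == y),
    x_honest=4, x_malicious=4,
    halt_predicate=lambda x, y: x % 2 == 0,
    abort_after_output=lambda out: False,
)
# res == (None, True): the adversary learned the result, the honest
# party got nothing, and the adversary's own view is the same as in
# a run without the failure.
```

The point the sketch illustrates is exactly the one made above: the failure shows up only in the honest party's output, never in the adversary's view of the run.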
For the client it might be feasible to construct a zero-knowledge proof that it acted correctly and obtained the failing output, whereas for the server this might be practically infeasible. We are not the only ones to propose such models. There is the model of covert adversaries by Aumann and Lindell, and there is the k-leakage model. It is important to note that the Aumann-Lindell model does not guarantee input privacy: with non-negligible probability, the adversary in the ideal world may get the inputs of an honest party. The k-leakage model gives up to k bits to the adversary. In our case, the information leakage in the protocol itself is zero bits, although information can of course leak in the post-processing context. Now, what about complaint handling? If you look at the ideal implementations of the covert model, you see that complaint handling there is impossible. It is possible for the k-leakage model, and it is also impossible for purely input-private models. Now let us see a practical protocol which shows that this notion is reasonable: it is secure enough, and it actually achieves good efficiency. When we started to write this article, our motivation was that there are input-private oblivious transfer protocols with optimal communication complexity, namely O(log N), where N is the number of database elements. The natural question is: can we extend this result to get security guarantees against malicious servers? And if we can, how many rounds would the new protocol have, and can we still achieve optimal communication complexity? Yet another motivation was that if we could do it in two rounds, then we could get PCP-based proofs with a reduced round count; but we did not get it in two rounds.
If you are more practically inclined, the question is: can you build an oblivious transfer protocol where you can detect that the server cheats, but the protocol still contains no zero-knowledge proofs? The construction, as you can see, is surprisingly simple. There is a client and a server, and the server has a database. The server simply commits to all database elements and sends the commitment to the client; after that, the client uses oblivious transfer to fetch the decommitment value for the chosen index. Usually you formalize this with a trusted setup phase where you generate the parameters for the commitment scheme and for the oblivious transfer protocol. Now, as I said, we wanted minimal communication, so instead of ordinary commitments we must use list commitments. These behave like ordinary commitments, except that the commitment digest is a compact value sent to the client, it must be possible to decommit individual elements, the individual decommitment values must be small, and the commitment must remain binding and hiding; namely, if you reveal one element, the others must still remain hidden. What is good is that it is straightforward to construct communication-efficient list commitments using double layering: essentially, you commit to all the elements with a standard commitment scheme and then hash the commitment values together to get a shorter digest. The bad news is that we do not achieve the desired optimal communication complexity: we do not know how to construct a list commitment scheme with communication complexity O(log N), only O(log² N), so we are a factor of log N away from the optimal value. Now I would like to introduce some ideas on how to prove security.
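The double-layering idea can be sketched as follows. This is a hedged toy version: the salted hashes stand in for a real statistically hiding commitment scheme, and all function names are hypothetical.

```python
import hashlib, os

def commit_list(db):
    """Commit to a list: inner salted-hash commitments, one per
    element, then an outer hash compressing them into one digest."""
    inner = []
    for elem in db:
        r = os.urandom(16)                     # per-element randomness
        inner.append((hashlib.sha256(r + elem).digest(), r))
    digest = hashlib.sha256(b"".join(c for c, _ in inner)).digest()
    return digest, inner                       # only digest goes to the client

def decommit(inner, i, db):
    """Opening for position i: the inner commitment list plus the
    randomness and value of element i (other elements stay hidden)."""
    return [c for c, _ in inner], inner[i][1], db[i]

def verify(digest, commits, r_i, elem, i):
    if hashlib.sha256(b"".join(commits)).digest() != digest:
        return False                           # outer layer broken
    return hashlib.sha256(r_i + elem).digest() == commits[i]

db = [b"alpha", b"beta", b"gamma"]
digest, inner = commit_list(db)
commits, r, elem = decommit(inner, 1, db)
assert verify(digest, commits, r, elem, 1)
```

Note that this flat variant has openings of size Θ(N), since the whole inner commitment list is revealed; hashing the inner commitments in a Merkle tree instead shrinks the opening, which is roughly where the O(log² N) communication mentioned above comes from.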
If you look at the malicious server: it commits to all the database elements and then computes partial decommitment values. If it acts maliciously, what should we do? We must construct a simulator, and the simulator gets the commitment value. The simulator is supposed to give the input to the trusted third party, so it has to take the commitment value and extract the underlying database. If we did not care about communication, we could use extractable commitments, and then it would be trivial: the simulator would use the extraction key and extract all the committed values. But since the commitment is compressing, this is impossible. There are several options for how to proceed. Let's see how the simulator works; we do it in the straightforward way. We fix the randomness of the malicious server and the honest client, use extraction to obtain the committed inputs, submit these to the trusted third party, construct the halting predicate, then fake the protocol execution and output whatever the server outputs, and that is sufficient. There are two tricky parts: extraction and the halting predicate. The halting predicate is actually trivial, because we have the code of the malicious server and the honest client, and we also have the randomness of both. So for any possible client input, we can compute whether the real-world protocol would end with an abort or not, and we just pack this into a predicate and send it to the trusted third party; it is an efficiently computable halting predicate, so this part is trivial. The non-trivial part is the extraction. Since the randomness is fixed, the naive approach would be to try all possible queries, and for a single query this works: the slowdown in the simulation is O(N), which is tolerable. But if you make k adaptive queries in a row, the slowdown becomes exponential. Now, what is also known, by a rather obscure result of mine together with Dr.
Buldas: every binding commitment is extractable, provided that the adversary does not have an auxiliary input. So in a setting where the adversary has no auxiliary input, we could use white-box extraction. However, you usually want to use the protocol in a context with pre-processing and post-processing, and you would like sequential composability; in those settings auxiliary inputs are essentially unavoidable, so this does not work. What we are left with is the old hammer: a zero-knowledge proof of knowledge in which the server simply proves that it knows what it committed to. Then, of course, we can extract the committed values and proceed with the simulation. Depending on what kind of zero-knowledge proof we use, the slowdown is either 1/epsilon or exponential in the number of rounds of the zero-knowledge proof. Now, what about the client? For the client things are simple, except for one fact: in the simulation we have to send a commitment to the client before we actually know which values it should open to. Therefore we have to use equivocable commitments. What is important in the proof is the layering: at the bottom layer we use equivocable commitments, on top of that any compressing commitment, and the result is an equivocable list commitment, which is all we need for this construction. The second thing I want to discuss is conditional disclosure of secrets, because it is a very nice protocol. It is usually used to convert semi-honest input-private protocols into input-private protocols secure against malicious clients. How does it work? The server has a secret and wants to release it, but only if the client's input satisfies some public predicate, for instance that the input lies in a valid range.
There are several such protocols where the inputs are simply encrypted and sent to the server, and the server just sends back a reply. Now, how could we make this consistent? The idea is the same: we let the server commit to the secret and then use the ordinary CDS protocol to fetch the decommitment key. And again, if the client does not manage to fetch a valid decommitment key, it just halts. Since we have only one commitment here, we can use theoretically extractable and equivocable commitments, or use a zero-knowledge proof of knowledge that the server knows what it committed to. To summarize: the most important contribution of this talk is actually the model of consistent computation. It took us about a year to get it down in the correct way, and we believe this is the right way to model a setting where you want correctness guarantees and input privacy but cannot afford zero-knowledge proofs. It is very close to the k-leakage model, but a bit different. And of course we have the constructions for oblivious transfer and CDS, which seem rather straightforward; what is not so straightforward is the security proof and the necessary formalism, namely the formalism of list commitments and how you define that a list commitment is equivocable, extractable, and so on. The big question which we did not solve in this article is whether two-round consistent protocols for two parties exist. If they do exist, that would be very interesting, for instance for oblivious transfer, because then one could make PCP-based proofs shorter.
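The commit-then-release pattern just described can be sketched end to end. This is a toy model under loud assumptions: `cds_release` is a trivial stand-in for the real input-private CDS subprotocol, and the keystream construction is illustrative only, not a secure commitment scheme.

```python
import hashlib, os

def _pad(key, n):
    # Illustrative keystream derived from the key (not a secure cipher).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def server_commit(secret):
    """Server commits to its secret up front: the client receives the
    commitment and a ciphertext; the server keeps the key r."""
    r = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(secret, _pad(r, len(secret))))
    com = hashlib.sha256(r + secret).digest()
    return (com, ct), r

def cds_release(r, predicate_holds):
    # Stand-in for the input-private CDS subprotocol: the client
    # learns r iff its input satisfies the public predicate, and an
    # unrelated random value otherwise.
    return r if predicate_holds else os.urandom(16)

def client_recover(com, ct, r):
    secret = bytes(a ^ b for a, b in zip(ct, _pad(r, len(ct))))
    if hashlib.sha256(r + secret).digest() != com:
        return None   # invalid opening: the client halts (detected fault)
    return secret

(com, ct), r = server_commit(b"top secret")
assert client_recover(com, ct, cds_release(r, True)) == b"top secret"
assert client_recover(com, ct, cds_release(r, False)) is None
```

The consistency property shows up in `client_recover`: whatever the server does, the client either recovers the committed secret or detects a fault and halts; it never accepts a tampered value.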
Another thing in the article: notice that the construction was rather trivial, in that we committed to the inputs and then used oblivious transfer or CDS to get the decommitment keys. What can actually be proven is that consistent computations and commitments are closely related: if you give me a protocol for consistent computation, I can construct a specific commitment scheme from it. So a commitment must in some sense sit inside any such protocol; exactly how they are related is an open question. We also did not give a generic construction for consistent computation, for instance for the two-party case, but such a construction is effectively in the article by Mohassel and Franklin, where they talk about the k-leakage model: they have a protocol where you run two circuits in parallel and at the end do a verification check, and this could actually be a consistent protocol. However, nobody has ever analyzed and formally proved that it is. One more interesting point: the model right now allows any kind of halting predicate, but you could specify a very restricted class of halting predicates, for instance linear predicates or systems of equations, and try to limit the adversary that way. Okay, as I see I am out of time, so I am happy to finish right now. Let's thank the speaker. Do we have questions?
Okay, I have a question about the setup assumptions that you use. Is the previous construction in the common reference string model or in the standard model? I saw those parameters. Can you explain?

It's a proof-theoretical trick. Essentially, you can always get rid of the trusted setup by replacing it with a standard multi-party protocol. For most commitment schemes the setup is trivial, and for the oblivious transfer protocol you usually have to generate a valid public key such that you know the corresponding secret key. So you can get rid of all of that; we keep the setup only to make the proof simple and straightforward.

Yeah, but the point is that if you run a protocol to simulate the setup, then there is also communication complexity that comes from that protocol, right?

Of course, yeah.

Are there more questions? Okay, we can thank the speaker again.