Okay, so I want to talk with you about our new, well, not the newest, but a proof-of-concept implementation to solve computational integrity. This is joint work with Eli Ben-Sasson, who is here with us, Iddo Bentov, Alessandro Chiesa, Ariel Gabizon, Daniel Genkin, Matan Hamilis, Evgenya Pergament, Mark Silberstein, Eran Tromer, and Madars Virza.

Okay, so in this talk we will first see what the goal of our research is. We will see other approaches, since we are not the only ones trying to solve such problems, then an overview of our solution, which is a proof-of-concept implementation, and we will finish with some concrete measurements of our implementation.

So first of all, what is our goal? Our goal is a system that solves computational integrity, and it should be close to practical. Computational integrity, also known as verifiable computation or delegation of computation, is a well-studied problem in cryptography, and much work has been done in that field, mainly on the theory side, though recently there are some applications as well. So we want to produce a practical computational integrity system. The message I would like you to take with you from this talk is that a practical solution does not require a trusted setup phase, which is a very common solution that I guess many of you know of.

So, our result. Our result is a system we call SCI, which stands for Scalable Computational Integrity. This is the first implementation of a theoretical construction that achieves all four of the following points: it is publicly verifiable, it does not require a trusted setup, it is universal, meaning we can construct a proof for any general program, and it has succinct verification. And for those of you who are still trying to fill their bingo cards, it is quantum secure as well.

Okay, now we'll scan some other approaches. There is a great line of very good work done in the field which is based on private randomness; those are designated-verifier or trusted-setup systems. They have great advantages, such as really tiny proofs, or arguments technically, only a few hundred bytes in the length of the argument, and very fast, concretely efficient verification. They have the obvious disadvantage that they are designated-verifier systems, or can use a trusted setup instead of being designated-verifier. Another class of systems is the non-universal systems, systems that are applicable only to a restricted class of programs. Those systems have the advantage that they can be implemented without any cryptographic assumptions, but of course the disadvantage is that they work only with a restricted class of programs. And last, there is the class of systems that have non-succinct verification. Those systems usually have a very efficient, concretely efficient prover, but the verification time is linear in the length of the execution.

So after we've seen what our system is not, now we will see what our system is. Our system is based on classical theory, a theory from the late 80s, the theory of PCPs, and we use some more recent research as well, both recent asymptotic improvements and more concrete improvements from theoretical papers. I would like to first show you our cryptographic assumptions, which are an important part of all the systems in this field. So first of all, our internal protocol is in the IOP model, the Interactive Oracle Proof model. It's a model similar to PCP, but in a PCP the prover generates one big proof and goes away, and then the verifier queries a small fraction of the information from this proof. In the Interactive Oracle Proof model, the prover and the verifier can interact, and the prover can send many such big proofs in response to the verifier's queries. We will see a bit later in this lecture how this model helps us to achieve more efficient systems. A toy sketch of this interaction pattern follows.
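To make the round structure of the IOP model concrete, here is a minimal toy sketch in Python; it is not from the talk, and the class names and the three-round structure are illustrative assumptions. In each round the prover sends a full oracle string that may depend on all the randomness seen so far, the verifier answers with public coins, and only at the end does the verifier read a few positions of each oracle rather than the oracles in full.

```python
import secrets

class ToyProver:
    """Illustrative prover: in each round it sends a (possibly huge)
    oracle string that may depend on all verifier randomness so far."""
    def __init__(self, witness):
        self.witness = witness

    def oracle_for_round(self, coins):
        # A real IOP prover would encode the witness here, e.g. as the
        # evaluation of a low-degree polynomial; this is only a stub.
        return [(b + sum(coins)) % 256 for b in self.witness]

def run_toy_iop(prover, num_rounds=3, num_queries=4):
    coins, oracles = [], []
    for _ in range(num_rounds):
        # Prover sends a full oracle, then the verifier replies with
        # fresh public randomness that the next oracle may depend on.
        oracles.append(prover.oracle_for_round(coins))
        coins.append(secrets.randbelow(2**32))
    # Only now does the verifier read a few positions of each oracle;
    # it never reads the oracles in full, which is what lets the
    # verifier stay much smaller than the proofs themselves.
    return [[o[secrets.randbelow(len(o))] for _ in range(num_queries)]
            for o in oracles]

# A real verifier would run a decision predicate over these few symbols.
run_toy_iop(ToyProver(witness=list(b"some witness bytes")))
```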
So in this model, our system is provably sound, with no need for any assumptions. Unfortunately, this model is unrealistic, because the verifier can't really receive proofs that are bigger than what its time complexity allows it to read. So we compile it to an argument system, which is a realistic system, and for that we use the Random Oracle Model; or, if we want to compile it to a non-interactive system, we use the Fiat-Shamir heuristic. Eventually, when we want to implement the system, as we did, we treat a cryptographic hash function as a random oracle. Specifically, in our system we use the SHA-256 hash function.

Okay, the protocol overview. The protocol is very similar to the classical protocol of Kilian from about 30 years ago. The prover generates some big proof and commits to it using Merkle commitments; the verifier receives the commitment and queries some information from the proof; the prover reveals the information together with the commitment's authentication paths, and the verifier checks that everything is okay. The novelty compared to Kilian's protocol is that in our protocol we allow interaction between the verifier and the prover, and this is done in order to reduce the load on the prover. We will now see exactly where we use this interaction.

Okay, this interaction is used for the low-degree testing, which is an important component of our system. I will now give an informal definition of low-degree testing. In low-degree testing, a verifier looks at some evaluation of a polynomial and, like every normal human being, the verifier wonders whether its degree is bounded by some value, 2 to the n. Unfortunately, the verifier's complexity is too low to verify this deterministically; it is polynomial in n. Then a prover appears and tells her: of course this polynomial is of low degree, can't you see that? The verifier tells him: no, and I don't know you, why would I trust you? So the prover tells her: okay, don't trust me, verify, and provides her with a big proof that the verifier can test, a big proof which is classically called a PCP of proximity. Okay, so this is a very informal definition of low-degree testing.

We use the low-degree test of Ben-Sasson and Madhu Sudan, and we are the only implementation that really implements succinct low-degree testing; even more, we are the only implementation that implements proof composition, as the low-degree test of Ben-Sasson and Sudan is a special case of proof composition. In contrast, the very common systems that use a trusted setup or are designated-verifier don't verify that an evaluation is of low degree; they use cryptography to force the prover to write only low-degree polynomials. Okay, so this is a big difference between the two approaches.

Okay, so we'll scan now the protocol of Ben-Sasson and Sudan. This protocol uses a mapping of univariate polynomials into bivariate polynomials that reduces the degree. Technically, the degree of the bivariate polynomial, both in its rows and in its columns, is bounded by the square root of d if and only if the degree of the original univariate polynomial is bounded by d. A small sketch of this decomposition appears below.
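To illustrate the degree-reduction mapping just described, here is a minimal Python sketch; it is a simplified generic version under my own choice of prime field and names, not the exact construction of Ben-Sasson and Sudan. Writing f(X) = Q(X, X^m) with m roughly the square root of d, where Q(X, Y) is built from the length-m chunks of f's coefficient vector, gives a bivariate polynomial whose row and column degrees are both below m.

```python
import math, random

P = 2**61 - 1  # an arbitrary prime modulus, chosen only for this illustration

def split_to_bivariate(f_coeffs):
    """Split univariate f (coefficient list, degree < d) into chunks
    g_0..g_{k-1} of length m ~ sqrt(d), so that
    f(X) = sum_j g_j(X) * X^(m*j) = Q(X, X^m) for Q(X,Y) = sum_j g_j(X)*Y^j.
    Both the row degree and the column degree of Q are below m."""
    d = len(f_coeffs)
    m = math.isqrt(d - 1) + 1
    return m, [f_coeffs[i:i + m] for i in range(0, d, m)]

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):  # Horner evaluation mod P
        acc = (acc * x + c) % P
    return acc

def eval_bivariate(chunks, x, y):
    # Q(x, y) = sum_j g_j(x) * y^j -- Horner again, in y this time.
    return eval_poly([eval_poly(g, x) for g in chunks], y)

# Sanity check: f(x) == Q(x, x^m) at random points of the field.
f = [random.randrange(P) for _ in range(1000)]  # degree < 1000
m, chunks = split_to_bivariate(f)
for _ in range(10):
    x = random.randrange(P)
    assert eval_poly(f, x) == eval_bivariate(chunks, x, pow(x, m, P))
print(f"degree bound d={len(f)} mapped to row/column degree < {m}")
```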
Now, they continue to use this mapping recursively on every row and column of this bivariate polynomial until they get to evaluations with degree small enough for the verifier to just verify deterministically. This construction results in something that you can think of as a tree of proofs. What the verifier does is draw randomly some of the leaves of this tree of proofs, and remember, the leaves are of very low degree, low enough for the verifier to test deterministically. It verifies those leaves deterministically, verifies locally the consistency of those leaves with their parents, and so on recursively, up until the verifier gets back to the root of the tree. So this is basically the idea behind the test of Ben-Sasson and Sudan.

An observation is that most of the sub-proofs in this tree are never accessed by the verifier. But in the PCP model, because the prover does not know in advance what the verifier's queries will be, the prover must generate the full tree of proofs, which results in a proof size of at least 2 to the n times n; this bounds the complexity of the prover as well, so the complexity of the prover is at least 2 to the n times n. Although this is quasi-linear in the degree, it is still too expensive for practical implementations.

In SCI we use interaction to solve this problem. What happens in SCI is that the tree is constructed layer after layer. For every layer that the prover is about to construct, the prover asks the verifier which sub-proofs in that layer the verifier will be interested in, and it constructs only the sub-proofs that the verifier will really access, because eventually the tests are local in the leaves and require only testing the consistency up to the root. This kind of construction does preserve the soundness of the system, because the path to the root is constructed prior to the question of whether the next sub-proof should be constructed or not. This results in much shorter proofs, of length 2 to the n. A formal description of this method, generalized to the proof-composition technique, can be found in a paper of mine with Eli Ben-Sasson.

We will move to some concrete measurements. In order to measure our system we constructed proofs for two benchmarks, two programs. Both programs solve the coNP version of the subset-sum problem. Technically, the prover wants to prove to the verifier that in some set of integers that is known to both of them, no non-trivial subset sums to zero. We use two implementations, written in TinyRAM assembly, which is reduced to our proof system. One implementation is the trivial exhaustive implementation: the prover tests all possible subsets, and the time complexity of this implementation is 2 to the n. But this implementation uses no RAM, which is an advantage for our system, because in our system using RAM introduces an overhead. The overhead is independent of how much RAM is used, but the fact that the program uses RAM at all requires a blow-up in the proof size by a multiplicative factor of twice the log of the length of the execution. The second benchmark does use RAM: this is a solution based on an algorithm that uses sorting. The complexity of this solution is lower, 2 to the half of n, but it uses RAM. Both strategies are sketched below. Okay.
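For concreteness, here is a plain-Python sketch of the two benchmark strategies; the actual benchmarks were written in TinyRAM assembly, so this translation, including the function names, is my own illustration. The first is the exhaustive 2^n scan that needs no working memory beyond the input; the second is the sort-based meet-in-the-middle variant that runs in roughly 2^(n/2) time but needs RAM for the table of partial sums.

```python
from itertools import combinations
from bisect import bisect_left, bisect_right

def no_zero_subset_exhaustive(nums):
    """Exhaustive benchmark: scan all 2^n - 1 non-empty subsets.
    Time 2^n, but no working memory beyond the input itself."""
    return all(sum(s) != 0
               for r in range(1, len(nums) + 1)
               for s in combinations(nums, r))

def no_zero_subset_sorted(nums):
    """Meet-in-the-middle benchmark: ~2^(n/2) time, but needs RAM to
    hold and sort the 2^(n/2) subset sums of the left half."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]
    left_sums = sorted(sum(a) for r in range(len(left) + 1)
                       for a in combinations(left, r))  # includes empty subset
    # A non-empty left subset that sums to 0 on its own: the empty
    # subset contributes one 0 to left_sums, so look for a second one.
    if bisect_right(left_sums, 0) - bisect_left(left_sums, 0) > 1:
        return False
    for r in range(1, len(right) + 1):  # all non-empty right parts
        for b in combinations(right, r):
            target = -sum(b)            # need a left part summing to this
            i = bisect_left(left_sums, target)
            if i < len(left_sums) and left_sums[i] == target:
                return False
    return True

# The two strategies must agree; -7 + 5 + 2 = 0, so both report False here.
nums = [3, -7, 11, 5, 2, -9, 14, 8]
assert no_zero_subset_exhaustive(nums) == no_zero_subset_sorted(nums) == False
```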
So here we have some graphs that we can dive into later by request. These plots are the ones relevant to the prover, and those are the ones relevant to the verifier, and what we can learn from them is that the proving time and the proof size behave as expected by theory: the asymptotic behavior is as expected, and the verifier's behavior is as expected as well. And concretely, not asymptotically, when we look at the numbers and compare the time it takes to prove a program to the time it takes to just execute the program on the computer, we get a slowdown of about a billion. Okay, maybe that doesn't sound so good to you, but it's pretty good, because it means we have a lot more research to do in this field. For the verifier we have a similar picture. We expect our verifier to be asymptotically succinct, and we expect the verification time to be much lower than the execution time of the program itself, but for these executions we see that verification saves time only for very, very long executions.

In this plot we compare our solution to some other approaches that solve a similar problem. The yellow bars are our system; the blue bars are a system that has a non-succinct setup phase; the red bars are a system that has a setup phase, but a succinct one; and the violet bars are a system that has succinct communication complexity but non-succinct verification. In the blue and red bars, which require a setup phase, the more transparent part is the overhead introduced by the setup phase. These bars show the prover time in minutes, and those bars show the communication complexity; notice the y-axis is log-scaled. We can summarize what we see in these bars by saying that our prover's performance is competitive with the other systems, it's not very far off. The verification is succinct, but slower than in other succinct-verification systems. And the communication is succinct as well, but it is very high compared to other systems with succinct communication complexity.

I would like to use this opportunity to introduce you to a follow-up work which is currently in progress, together with Eli Ben-Sasson, Iddo Bentov, and Yinon Horesh. This work uses the same approach as SCI, the same classical constructions, but with new ideas. It includes zero knowledge, which SCI does not. It introduces new theory, and we reduced the prover overhead drastically, from about a billion to about a million, a factor of a thousand. And with this system, practical succinctness is reached: we are much closer to the point where verification is more efficient than just executing the program. You can see that the proving time here is the lowest among the other systems. The verification is not as fast as in the systems based on private randomness, but it's really competitive, really close. And our argument size, our proof size, is much lower than it was in SCI: it went down to hundreds of kilobytes instead of tens of megabytes, which is still about a thousand times longer than the argument size in the systems based on private randomness.

I would like to thank as well the programmers who worked with us: Ohad Barta, Lior Greenblatt, Shaul Kfir, Gil Timnat, and Arnon Yogev.
And finally, our summary: I introduced you to our system SCI, we have seen some concrete measurements of the system, and, with just about a minute left before the next lecture, the code is public at this link. Okay, thank you.