Hello, this is joint work with Eli Ben-Sasson, Iddo Bentov, and Yinon Horesh. First of all, to set expectations: in this talk I will mainly focus on the background and on our results. I will not go deep into the details of how we achieve these results; you are welcome to talk with me afterwards or to check the paper.

We start from the celebrated work of Babai et al., which suggested that a user can send a program to an untrusted server to execute and return a result. But instead of only returning the result of the execution, the untrusted server provides a proof of integrity showing that this result is indeed the result of the program the user asked for. Additionally, the program may have a place for some auxiliary input: the prover can feed a secret input into the program as well, which makes the proof a proof of knowledge of an input satisfying the program.

The BFLS construction had many great features; I would like to focus on two of them. One of them is succinct verification: verifying the proof takes time polylogarithmic in the length of the execution. The other is public-coin randomness, which means that, using common compilers, the prover can generate a non-interactive argument that can be published to anybody, and anybody can verify it with no trust assumptions.

Recently there has been a lot of work on systems for delegated computation. Unfortunately, none of these works achieved both succinct verification and public-coin randomness. Two years ago, SCI was presented, achieving both succinct verification and public-coin randomness, but the performance of the SCI system was not applicable to real-world usage. In this work I present ZK-STARK, a system which has succinct verification, public-coin randomness, and performance appealing for real-life usage. This is the message I would like you to take with you today: STARK is both succinctly verifiable with public-coin randomness and applicable to real-life usage. This is why you will see this Venn diagram a lot.

Let me continue with some distinctions between the STARK system and other systems that have public-coin randomness or succinct verification but are not in the intersection. A very major difference is the arithmetization technique used. One common arithmetization technique is to describe the program as a circuit. In this case the verifier, since it has to know what the program is, cannot be expected to be sublinear in the circuit size, so we cannot expect succinct verification. This technique is indeed very common in the systems which have public-coin randomness but no succinct verification. Another technique is very similar, but it takes the circuit describing the program and preprocesses it in a setup phase. This setup phase outputs succinct verification parameters that can be used by the verifier to verify a proof succinctly. Unfortunately, one of the drawbacks of this technique is that if information leaks from the setup phase, it can be used to forge proofs. In many of these systems it is suggested to run the setup as a multi-party computation to reduce this trust assumption.

What we do in STARK is completely different. We are not describing the program as a circuit. Instead, we are describing the transition function as a circuit. You can think of the CPU that we have in every computer as some circuit: this circuit is constant and has some constant complexity, but it can be used to execute both short programs and very long programs. This is the case for STARK as well. We arithmetize only the transition function, and we use it to verify both short traces and very long traces. This is where the succinctness of STARK comes from.
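To make this concrete, here is a minimal sketch in Python of what arithmetizing a transition function means. The toy Fibonacci-style "CPU" and the field modulus are my own illustrative choices, not the actual STARK constraint system; the point is only that one constant-size constraint covers every step of a trace of any length.

```python
# A toy sketch (not the STARK construction itself): arithmetizing a
# transition function rather than a whole program. We model a tiny
# "CPU" whose single step is a fixed map over a prime field, and check
# that an execution trace of any length is consistent with it.

P = 2**61 - 1  # a prime field modulus, chosen arbitrarily for this sketch

def transition(state):
    """One CPU step: a fixed, constant-size arithmetic map.
    Here: (a, b) -> (b, a + b), a Fibonacci-style update."""
    a, b = state
    return (b, (a + b) % P)

def constraint(cur, nxt):
    """The transition constraint as a polynomial identity: it vanishes
    exactly when `nxt` equals `transition(cur)`."""
    a, b = cur
    c, d = nxt
    return ((c - b) % P, (d - (a + b)) % P)

def run_and_check(initial, steps):
    # The prover executes the program, producing a trace...
    trace = [initial]
    for _ in range(steps):
        trace.append(transition(trace[-1]))
    # ...and the same constant-size constraint covers every row pair,
    # whether the trace has 10 rows or 10 million.
    assert all(constraint(trace[i], trace[i + 1]) == (0, 0)
               for i in range(steps))
    return trace

run_and_check((1, 1), steps=1000)
```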
I would like to compare to other systems, first of all asymptotically. The last three lines in this table represent the work in the intersection, and we can notice two things about them. One: there is no red cell, so nothing here is really awful. The other: there is no green cell, which means there is still a lot of room for research.

Okay. I will now continue by showing how, specifically in this line of work, we improved from SCI to libSTARK (general) and libSTARK with logarithmic RAM. The last two lines, both representing libSTARK, are for the following scenarios: libSTARK (general) is for general computations using RAM access, while libSTARK with logarithmic RAM is an optimization we have for the case where a program accesses at most logarithmically much RAM in the length of the trace.

So, the first optimization, from SCI to libSTARK: one of the optimizations we introduced in libSTARK is to use the FRI low-degree test instead of the Ben-Sasson–Sudan low-degree test. FRI was introduced last year by the same authors as this paper. When we go to libSTARK with logarithmic RAM, this may be a bit technical and unfamiliar to many, but we drop a De Bruijn routing network which is used to verify that RAM accesses are consistent. I will not dive much deeper into this for lack of time, but this is an optimization that we were able to make.

I will now dive into concrete measurements and comparison to other systems. For the concrete measurements, we wrote an exhaustive subset-sum solver. This solver was written in TinyRAM assembly and compiled both to the SCI and STARK constraint systems and to a libSNARK circuit. The parameters of this libSNARK circuit were used to estimate the performance of other circuit-based systems.

Here we can see that the STARK verification is more efficient than that of any other public-coin system, just as we expected, and is outperformed only by the libSNARK verifier, which requires a trusted setup. You can see that the time it takes to verify a STARK proof is about a tenth of a second, and it is insensitive to the length of the execution, so it would not change much even for very long executions.

Let's look at the argument size. It is like a proof, but we call it an argument because it relies on computational assumptions. We can see that the STARK proof is longer than the proofs of many other systems: it requires a few hundred kilobytes. But again, it is insensitive to the length of the computation, and it would stay this way even for very long computations.

About the proving time: the prover is faster than any other prover we compared to, by at least a factor of 10. One of the reasons is that some of the systems require intensive elliptic-curve arithmetic while proving, whereas STARK requires only finite-field arithmetic, which is much easier for a CPU. Another reason could be just engineering: it might be the case that finer optimization of the other provers would reduce this gap.
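For a rough feel of the FRI low-degree test mentioned above, here is a hedged sketch of a single folding round. The toy field, domain size, and polynomial are my own choices, and the real protocol adds Merkle commitments, many rounds, and consistency queries; the sketch only shows how a verifier challenge halves both the degree and the evaluation domain.

```python
# One FRI-style folding round over a toy prime field (illustrative only).
P = 97  # toy prime; real instantiations use much larger fields

def fri_fold(evals, domain, beta):
    """Fold evaluations of f on `domain` (a multiplicative subgroup of
    even order n, so domain[i + n//2] == -domain[i]) into evaluations of
    f_even + beta * f_odd on the squared half-domain, where
    f(x) = f_even(x^2) + x * f_odd(x^2)."""
    half = len(evals) // 2
    inv2 = pow(2, P - 2, P)
    folded = []
    for i in range(half):
        x = domain[i]
        f_even = (evals[i] + evals[i + half]) * inv2 % P
        f_odd = (evals[i] - evals[i + half]) * inv2 * pow(x, P - 2, P) % P
        folded.append((f_even + beta * f_odd) % P)
    return folded, [x * x % P for x in domain[:half]]

# Toy usage: f(x) = 3x^3 + x + 5 on a size-8 domain, so f_even(y) = 5
# and f_odd(y) = 1 + 3y; the fold must yield 5 + beta * (1 + 3y).
w = 64  # generator of an order-8 subgroup mod 97 (w**4 == -1 mod 97)
domain = [pow(w, i, P) for i in range(8)]
evals = [(3 * x**3 + x + 5) % P for x in domain]
folded, half_domain = fri_fold(evals, domain, beta=7)
assert folded == [(5 + 7 * (1 + 3 * y)) % P for y in half_domain]
```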
I would like to summarize by reminding you again that ZK-STARK has both succinct verification and public-coin randomness, and its performance is appealing for real-life usage. And if this is not enough for you, it is post-quantum secure as well. Thank you. Questions? Any questions? Let's thank the speaker again.

The second part of this talk is about Libra: succinct zero-knowledge proofs with optimal prover computation. All right, thanks for the introduction, and thanks to Michael for the last talk. Now I'm going to introduce Libra, which is another zero-knowledge proof system. This is joint work with Jiaheng, Yupeng, Babis, and Dawn. And I need to clarify that this work is different from the Facebook Libra project: this work was submitted in February, before Facebook's Libra was released, so it's a nice coincidence.

As many previous talks mentioned, a zero-knowledge proof allows the prover to convince the verifier of the validity of a statement, where the statement is modelled as a circuit. In every proof system we have the completeness, soundness, and zero-knowledge properties. The completeness property says that if both parties are honest, then the verifier should always accept. The soundness property says that if the prover is malicious, then the verifier should not accept except with negligible probability. And zero-knowledge says that the verifier cannot learn anything about the witness w from the proof pi.

There are three major criteria for zero-knowledge proof protocols: prover time, proof size, and verification time. The category we are focusing on is protocols with succinct proof size and fast verification time, and there are many existing zero-knowledge protocols that satisfy these properties. The most widely used one is SNARKs and their implementation, libSNARK. libSNARK supports all kinds of functions, and it has constant proof size and constant verification time; SNARKs are even widely deployed in the real world, for example in Zcash. However, they have a slow prover, and they have a function-dependent trusted setup. To address these problems, in recent years many protocols have been proposed by different groups, and here is a list of these protocols with full implementations, categorized by the underlying technique.

Our protocol is an interactive protocol based on the framework proposed in vSQL by Zhang et al. Actually, all of these protocols follow the same framework, so now I'm going to introduce it. First we have a witness, and the prover commits to the polynomial that is defined by the witness. Then the parties engage in an interactive proof protocol called the GKR protocol, a doubly-efficient interactive proof. This protocol reduces the validity of the output y to the validity of the committed polynomial defined by the input w. Finally, the verifier sends a random challenge for the input, and the prover opens the input polynomial at this random point with a proof of correctness. This completes the whole framework. All the constructions follow this framework, and our main contribution is on the GKR protocol. First, we give a linear-time GKR prover for arbitrary layered circuits, which is optimal. Second, we add the zero-knowledge property to the GKR protocol without any computational overhead. So finally we get a zero-knowledge argument scheme with a linear-time prover and succinct proof size and verification.
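To give a feel for the polynomial the prover commits to in this framework, here is a hedged sketch of the multilinear extension (MLE) of a witness and its evaluation at a point. The toy field modulus and the variable ordering are my own choices, and the commitment scheme itself is omitted; the sketch only shows that the MLE agrees with the witness on the boolean hypercube and can be opened at any point in time linear in the witness size.

```python
# A hedged sketch of the witness polynomial: the multilinear extension.
P = 2**61 - 1  # toy prime modulus for this sketch

def mle_evaluate(table, r):
    """Evaluate the MLE of `table` (length 2**len(r), indexed by boolean
    vectors, low bit first) at point r, folding one variable at a time."""
    cur = list(table)
    for r_i in r:
        # Fix the next variable to r_i: interpolate each (x=0, x=1) pair.
        cur = [(cur[2 * j] + r_i * (cur[2 * j + 1] - cur[2 * j])) % P
               for j in range(len(cur) // 2)]
    return cur[0]

# On a boolean point, the MLE reproduces the table entry itself:
w = [3, 1, 4, 1, 5, 9, 2, 6]
assert mle_evaluate(w, [1, 0, 1]) == w[0b101]
# At a random point it is a linear combination of all witness values:
print(mle_evaluate(w, [7, 11, 13]))
```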
For the rest of the talk I will tell you some technical details about this prover and about the zero-knowledge conversion for the GKR protocol.

So first let's discuss the linear-time prover. The GKR protocol is based on the sum-check protocol, so let me introduce sum-check first. The sum-check protocol is a fundamental building block for many applications in cryptography. The goal of the sum-check protocol is to check that the summation of a polynomial f over the boolean hypercube is equal to some claimed constant H. This is an interactive proof protocol, and at the end it reduces to a random point: the verifier needs oracle access to this polynomial evaluated at this random point. If we have such oracle access, the proof size will be O(log n) and the verification time will also be O(log n).

Using sum-check as a building block, let me introduce the GKR protocol. I will use vector notation to represent an array of variables whose size is log n. First we have to define the polynomials used in the sum-check protocol; take the input layer as an example. This polynomial is the multilinear extension of the input layer, and it agrees with the input values on the boolean hypercube. For example, at 0 it agrees with the first input, and at the rest of the points this polynomial agrees with the corresponding gate values. To make this possible, we use the following relationship to define these polynomials: each layer's polynomial is related to the previous layer's polynomial, so let me explain it in detail. The mult and add functions are called the wiring predicates. A wiring predicate is defined as follows: if there exists a gate g0 that is connected to u0 and v0, then the mult predicate evaluates to 1 on this input, and otherwise it evaluates to 0. So in this formula, if g is a mult gate and u, v are the corresponding inputs, the value of this polynomial evaluation will be the product of the corresponding inputs, and the same holds for the add gate. So this polynomial always agrees with the gate evaluations, and the complexity of the GKR protocol is the sum of the complexities of all of these sum-check protocols. So in order to improve the GKR protocol, we need to improve the sum-check protocol.

Let me introduce some prior work on improving the sum-check prover to run in linear time. Thaler introduced a dynamic-programming-based method at CRYPTO 2013. The first step of this method is to initialize a lookup table of the polynomial over the whole hypercube. Then it uses dynamic programming to reduce the table: the size of the next table is half of the size of the previous table, so the total size is linear, according to this formula, and the final block of this table consists of only one element. However, for the GKR polynomials this is not efficient, because we have two variables u and v, each of log n variables, so in total the table has 2^{2 log n} = n^2 entries, and the initialization takes n^2 time, which is not efficient for practical use.

So in our paper we propose a new way to address this problem. Building on the protocol proposed by Thaler, we divide the sum-check into two phases: the first phase is only about the variable u, and the second phase is only about the variable v. Here we define a polynomial h as follows, so these two expressions are actually equal, and this part of the polynomial is only about u. So, as Thaler's protocol requires, we need to initialize a lookup table for this polynomial.
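Before continuing with how those lookup tables are initialized, here is a hedged sketch of the sum-check protocol itself with the Thaler-style dynamic-programming prover just described, specialized to a multilinear f given by its evaluation table. The toy field modulus and the simulated interaction are my own simplifications (a real deployment would use live interaction or Fiat-Shamir); the point is that each round halves the prover's table, so total prover work is linear in the table size.

```python
import random

P = 2**61 - 1  # toy prime modulus for this sketch

def fold(table, r):
    """Fix the first remaining variable of the MLE to r (one DP step)."""
    return [(table[2 * j] + r * (table[2 * j + 1] - table[2 * j])) % P
            for j in range(len(table) // 2)]

def sumcheck(table, num_vars):
    claimed_sum = sum(table) % P          # the prover's claim H
    running = claimed_sum
    cur = list(table)
    challenges = []
    for _ in range(num_vars):
        # Prover: since f is multilinear, the round polynomial is linear,
        # so sending its values at 0 and 1 determines it.
        g0 = sum(cur[0::2]) % P
        g1 = sum(cur[1::2]) % P
        # Verifier: check consistency with the running claim.
        assert (g0 + g1) % P == running
        r = random.randrange(P)           # verifier's public-coin challenge
        running = (g0 + r * (g1 - g0)) % P
        challenges.append(r)
        cur = fold(cur, r)                # prover's DP step: table halves
    # Final check, standing in for the verifier's one oracle query to f:
    assert cur[0] == running
    return challenges, running

table = [random.randrange(P) for _ in range(2**10)]
sumcheck(table, 10)
```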
Back to the lookup tables: the table for V(u) is quite simple, since V agrees with the circuit evaluation, so we can use the circuit evaluation itself as the lookup table. h is not as easy, but in our paper we introduce a way to initialize h with a linear scan over the circuit; since time is limited, I will not go through the initialization process. Then we run Thaler's method, and we complete this phase by reducing u to the randomness chosen during the sum-check. At that point this middle term is actually a constant, and the remaining term is the actual work for the sum-check. Then we do the same thing as in phase 1, except that h is defined in another way, and this function can also be initialized with a linear scan of the circuit. Then we run Thaler's method again to complete the sum-check. So now we have a linear-time prover.

Next we are going to introduce the second technique, the zero-knowledge GKR protocol. In this section we introduce a zero-knowledge conversion for the GKR protocol without any computational overhead. So let me tell you why GKR is not zero-knowledge: it leaks some evaluations. In every interaction of the GKR protocol there is a polynomial evaluation related to the current circuit layer. The polynomial is evaluated at a random point, and this evaluation is a linear combination of the gate values. So it is not zero-knowledge: it leaks some information about the gate values.

There is prior work addressing these problems. One line of work makes GKR zero-knowledge using a Cramer-Damgård-style transformation, which incurs computational overhead, because under this transformation an addition becomes a multiplication and a multiplication becomes an exponentiation. This incurs something like a 10x to 100x slowdown compared to the plain GKR protocol.

So to make GKR zero-knowledge we use a new approach, masking polynomials, inspired by Chiesa et al. In their paper they add a random masking polynomial delta to the original function f, and in this way the sum-check interaction does not leak any information, because with a random polynomial added, everything looks random. However, in their construction delta is as big as f, and note that we need a polynomial commitment for each masking polynomial to ensure that the evaluations of the masking polynomial are correct. So if delta is big, it is computationally expensive to commit to it and to open the commitment at a random point. So their work is mainly of theoretical interest, and we really need to construct a delta that is small, for computational efficiency.

Let me give you the intuition behind our construction. First, the leakage is small: the leakage of the whole GKR protocol is only polylogarithmic. So to cover the polylogarithmic leakage, we intuitively only need a polylogarithmically sized masking polynomial. And we have a nice construction of this masking polynomial: all of its variables are separated, and the size of the polynomial is only polylogarithmic, so it does not incur any computational overhead.

So with a linear-time prover and a conversion of the GKR protocol to zero-knowledge without any computational overhead, we finally get a zero-knowledge proof system with a linear-time prover, succinct proof size, and succinct verification. Concretely speaking, our prover time is the best among these protocols, our verification time is less than one second, and our proof size is reasonable. We also have an open-source implementation on GitHub; you can check it out yourself.
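To illustrate the "variables separated" masking polynomial idea just described, here is a hedged sketch (my own simplification, not the paper's exact construction; the per-variable degree and the way delta would be combined with f are illustrative assumptions): delta has only O(l) coefficients for l variables, yet its sum over the hypercube, which the sum-check needs, has a simple closed form.

```python
import random
from itertools import product

P = 2**61 - 1  # toy prime modulus for this sketch

def random_mask(num_vars, deg=2):
    """delta(x) = a0 + sum_i g_i(x_i), where each univariate g_i has no
    constant term: only O(num_vars * deg) coefficients in total."""
    a0 = random.randrange(P)
    gs = [[random.randrange(P) for _ in range(deg)] for _ in range(num_vars)]
    return a0, gs

def mask_eval(mask, x):
    a0, gs = mask
    total = a0
    for g_i, x_i in zip(gs, x):
        total += sum(c * pow(x_i, d + 1, P) for d, c in enumerate(g_i))
    return total % P

def mask_hypercube_sum(mask):
    """Sum of delta over {0,1}^l in closed form: a0 appears 2^l times,
    and each g_i contributes g_i(1) on the half of the cube where x_i = 1."""
    a0, gs = mask
    l = len(gs)
    total = a0 * pow(2, l, P) % P
    for g_i in gs:
        total = (total + sum(g_i) * pow(2, l - 1, P)) % P
    return total

# Sanity check against brute force on a small cube:
m = random_mask(3)
assert mask_hypercube_sum(m) == sum(mask_eval(m, b) for b in product((0, 1), repeat=3)) % P
# The prover would commit to delta and run the sum-check on f + rho * delta
# for a verifier challenge rho; the claimed sum becomes
# H + rho * mask_hypercube_sum(m), while each round message is re-randomized.
```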
And finally we come to the conclusion: we have a linear-time prover, we have an efficient zero-knowledge conversion for the GKR protocol, and combining them, we have a zero-knowledge proof system with a linear-time prover, fast verification, and succinct proof size. Thank you, I'm happy to take questions. Questions? All right, okay. So let's thank the speaker again.