So I'm going to talk about succinct arguments and witness encryption from groups. This is joint work with Yuval Ishai, Rafail Ostrovsky, and David Wu. Our motivating questions as we come to this work are, first, how short can a proof be? There is already a long series of works aiming to make proofs shorter and shorter; can we push this frontier further? Our second question: can we build witness encryption from 20th-century cryptography? These questions turned out to be related, and we formalize them both in the generic group model introduced by Shoup in '97. For the first question, the argument that we construct, while formalized in the generic group model, actually uses well-known assumptions from previous works, such as linear-only encryption, and for the second question, building witness encryption from the generic group model alone is already a big step compared to previous works.

So, the first question: how short can a proof be? There is already a line of work initiated by Groth in 2010, and then Groth 2016 with roughly 1000 bits, which is highly practical. In the less-than-1000-bits regime, there is one construction by Bitansky et al. with about 500 bits, but it is impractical as it relies on classical PCPs, and there is a series of works which ends with proofs of fewer than 500 bits, but they are impractical as they rely on obfuscation. So can we improve the succinctness of practical proof systems, meaning build a practical proof system with fewer than 1000 bits? It turns out that we can, but at a price: we end up with a designated-verifier argument, non-negligible soundness error, and higher prover running time.

For the second question, of obtaining witness encryption from 20th-century cryptography: current constructions of witness encryption assume multilinear maps or unexplored algebraic structures. Can we use a generic group in order to construct witness encryption? This question is similar in flavor to a question raised by Impagliazzo and Rudich many years ago. We can divide the cryptographic world into Minicrypt, primitives that can be constructed in the random oracle model; Cryptomania, primitives that are derived from key agreement and public-key encryption; and Obfustopia, primitives that require obfuscation. Impagliazzo and Rudich asked whether we can obtain public-key encryption in the random oracle model and discovered that the answer is no. We answer a similar question: we ask whether we can obtain witness encryption, a primitive so far constructed only with obfuscation, in the generic group model, which has a Cryptomania flavor. It turns out that the answer is yes, assuming a plausible but unproven hardness-of-approximation hypothesis.

Let's dive deeper. Our first goal is a designated-verifier SNARG, or succinct non-interactive argument. In a SNARG, we have the prover and the verifier, and the prover wants to convince the verifier of the validity of some claim. In order to do so, it sends a short proof, and a SNARG wants this proof to be as short as humanly possible. To do so, we allow the generation of a common reference string, used by the verifier and the prover, at a preprocessing phase, as well as a secret state st that only the verifier can use; that's why it's a designated-verifier setup.
In addition, we demand, of course, completeness, so if x and w are in the relation, the verifier is convinced with high probability, and soundness: if a malicious prover is efficient and x is not in the language defined by R, the verifier should reject with high probability. Our mindset as we come to this question is in contrast to previous works. Previous works focus on publicly verifiable SNARGs in bilinear groups, and we want to beat their succinctness, getting proofs of fewer than 1000 bits, and we want to use less structure, so only standard groups. In order to do so, we are willing to settle for a designated-verifier SNARG or a laconic argument rather than a publicly verifiable SNARG like previous works; we are willing to compromise on the soundness error; most crucially, we are willing to compromise on completeness and allow imperfect completeness, and this is the first work, as far as we know, that exploits this relaxation; and we are also willing to compromise on prover and verifier complexity. Nonetheless, as we'll see soon, our constructions are highly practical in some real-life scenarios.

Our second goal is witness encryption, introduced by Garg et al. Witness encryption is defined with respect to a relation R: we have a function Encrypt that gets a message m and an instance x and outputs a ciphertext, and a function Decrypt that gets the ciphertext and a witness w. We require that Decrypt recovers the message m when x and w are in the relation, and in case x is not in the language defined by the relation R, we demand semantic security.

Our theoretical contribution is that we present a witness encryption construction that relies only on the generic group model and a hardness-of-approximation hypothesis. This hardness-of-approximation hypothesis may be proven unconditionally in the future, and thus, cryptographically speaking, our witness encryption relies only on the generic group model. This rules out unconditionally ruling out witness encryption in the generic group model, unless there are some serious advances in approximation algorithms, which is very surprising, because such ruling-out results in the generic group model are known for primitives of a similar flavor, such as identity-based encryption. In addition, we present a new avenue for witness encryption constructions that may lead to better results in the future.

On the practical frontier, our designated-verifier SNARG requires only 512 bits, compared to the 1024 bits of Groth16, and it requires only two group exponentiations, so roughly ten times faster than Groth16. However, it does require a relatively large common reference string and a slow prover, which is still acceptable for relatively small Boolean circuits, as we'll see soon enough. Thus, this designated-verifier SNARG is very attractive when the verifier is weak or somehow energy-constrained and the relation R has a small Boolean circuit. For example, if we look at a Boolean circuit that computes a Goldreich one-way function, which, as shown in recent work, can be done with about 1500 wires, we require 34 megabytes of common reference string and only a few seconds of proving time, and, for soundness error of 1/128, the verifier runs in one tenth of a millisecond and requires six megabytes of lookup table. We refer to the paper for more discussion about the efficiency of our constructions under various parameter settings.
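To make the witness encryption syntax from a moment ago concrete, here is a minimal interface sketch in Python. The function names and the toy relation are illustrative only, and the placeholder "ciphertext" is of course not secure; the point is just the shape of Encrypt and Decrypt and the correctness requirement.

```python
# Illustrative interface sketch of witness encryption for a relation R.
# NOT a secure scheme: the ciphertext below stores the message in the clear,
# it only demonstrates the syntax and the correctness requirement.

def witness_encrypt(R, x, m):
    """Encrypt message m under instance x of relation R."""
    return {"x": x, "payload": m}          # placeholder ciphertext

def witness_decrypt(R, ct, w):
    """Decrypt with a witness w; succeeds only if (x, w) is in the relation."""
    if R(ct["x"], w):                      # correctness: (x, w) in R => recover m
        return ct["payload"]
    return None                            # otherwise decryption fails

# Toy relation: x is a target sum, w is a tuple of indices into a fixed list.
values = [3, 5, 7, 11]
R = lambda x, w: sum(values[i] for i in w) == x

ct = witness_encrypt(R, 15, "secret message")
assert witness_decrypt(R, ct, (1, 2, 0)) == "secret message"   # 5 + 7 + 3 = 15
assert witness_decrypt(R, ct, (0, 1)) is None                  # not a valid witness
```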
So, if we look at the world map of succinct non-interactive arguments, we first have the Groth16 SNARG; it is in the bilinear group setting and relies on a linear PCP, with negligible soundness error and perfect completeness. In the plain group setting, we have two candidates offered by Bitansky et al.: the first one has eight group elements and relies on a linear PCP, and the second one has two group elements but relies on a classical PCP. Our first new construction has two group elements, but it relies on a linear PCP and not on classical PCPs, hence the concrete efficiency that it offers, and it also has extremely fast verification, assuming that the setup algorithm generates a lookup table whose size is the square root of the circuit size, or linear in the circuit size for perfect completeness. Our second construction is a laconic argument with negligible soundness and only two group elements. This is the first laconic argument with two elements and negligible soundness; it does rely on a classical PCP and suffers from a non-negligible completeness error. Our third construction, and perhaps the most interesting from a theoretical perspective, is a laconic argument with only one group element as the answer and negligible soundness. Again, it works in the generic group model and has a non-negligible completeness error. It uses an unproven but plausible hypothesis regarding the hardness of approximating distances in linear codes, but it does imply a witness encryption scheme with no further assumptions.

So now let's dive deeper into how we are able to obtain all these constructions, starting with the concretely efficient two-element designated-verifier SNARG. Our first stop in this journey is linear PCPs, introduced by Ishai et al. and later developed by Bitansky et al. Linear PCPs are defined over a field F with respect to a relation R. We have the prover, which generates a proof pi over the field F, and we have the query algorithm, which asks the prover k queries q_1 through q_k over the field and, in addition, generates a secret state st. Next, we have the decision procedure, which gets the specific instance, the secret state st, and the inner products of all the queries with the proof, and then it should output accept or reject; we demand completeness and soundness in the usual manner.

So what is the connection between linear PCPs and SNARGs? Bitansky et al. introduced a generic compilation from a one-query linear PCP to a one-ciphertext SNARG that works as follows. In the linear PCP world, we have the prover, which outputs the proof pi, and we have the verifier, which first asks a query q, then gets the inner product of this q with pi, and decides whether to accept or reject. In the SNARG world, we have a preprocessing phase that does not write q in the clear; instead, it writes the encryptions of all the elements of q as the CRS, together with the public key. Now the prover is able to compute the result, which is an encryption of the inner product of q and pi, and it is able to do so because we use linear-only encryption, meaning encryption schemes that only allow additive homomorphism and no other types of functions over the original plaintexts. The verifier knows the secret key as well, so it is able to decrypt the answer and call the original linear PCP decision procedure. As we'll see, we use a variant of ElGamal encryption, which means that we need two group elements in order to encrypt one answer.
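Here is a minimal sketch, with toy parameters and illustrative names, of the compilation idea just described: the CRS contains coordinate-wise encryptions of the linear PCP query under an additively homomorphic ("linear-only") encryption, the prover evaluates the inner product under encryption, and the designated verifier decrypts. The encryption used here is ElGamal in the exponent, so decryption yields g to the power of the answer, and the verifier recovers the answer only because it is small, mirroring the discrete-log issue discussed below.

```python
# Toy sketch of the Bitansky et al. style compilation with ElGamal in the exponent.
# Parameters and values are illustrative only, not the paper's actual instantiation.
import random

p, q, g = 1019, 509, 4          # toy group: g generates the subgroup of prime order q in Z_p*

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)                              # (secret key, public key h = g^sk)

def enc(pk, m):                                           # Enc(m) = (g^r, h^r * g^m)
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(pk, r, p) * pow(g, m % q, p) % p)

def add(c1, c2):                                          # homomorphic addition of plaintexts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def scale(c, k):                                          # homomorphic scaling by a known constant
    return (pow(c[0], k, p), pow(c[1], k, p))

def dec_small(sk, c, bound):                              # recover g^m, then brute-force a small m
    gm = c[1] * pow(c[0], q - sk, p) % p
    return next(a for a in range(bound) if pow(g, a, p) == gm)

sk, pk = keygen()
query = [7, 3, 11, 2]                                     # a single linear PCP query (toy values)
crs   = [enc(pk, qi) for qi in query]                     # preprocessing: the query is hidden in the CRS
proof = [1, 0, 2, 5]                                      # the prover's linear PCP proof pi

answer_ct = enc(pk, 0)
for c, pi_i in zip(crs, proof):                           # prover: Enc(<q, pi>) via linear operations only
    answer_ct = add(answer_ct, scale(c, pi_i))

assert dec_small(sk, answer_ct, 128) == sum(qi * pi_i for qi, pi_i in zip(query, proof))  # = 39
```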
So we are searching for a one-query linear PCP that does not rely on a classical PCP. Unfortunately, as can be seen from this table, every previously known linear PCP either has more than one query or relies on a classical PCP. So we need a new linear PCP that has only one query and does not rely on a classical PCP, and we do so by presenting a packing technique. What is the intuition? Say we have a k-query linear PCP: the verifier asks queries q_1 through q_k and gets k answers a_1 through a_k. Now let's assume that the linear PCP is bounded, meaning that for every possible randomness of the query algorithm and for every possible honest proof generated by the prover, for every instance, if we look at the answers over the integers, all the answers are smaller than some small integer B. If this is the case, we can simply pack all these answers into one. How do we do it? We'll send one query, which is a linear combination of all the queries weighted by suitable powers of B. Now we get a single answer, and since we know that in the honest-prover case all the original a_1 through a_k are less than B, we are able to take this number, extract a_1 through a_k from it, and call the original verification procedure. This is the intuition, and a small sketch of this packing step appears below.

Now, which linear-only encryption do we use? We need an encryption scheme that works in the generic group model and is additively homomorphic. One candidate is ElGamal, which, as I said, has two group elements per ciphertext. In order to get an additively homomorphic version of ElGamal, because ElGamal is only multiplicatively homomorphic, we send the encryption of g to the power of the relevant query element for every query vector q, and this variant is proven to be a linear-only encryption in the GGM. Now, since the plaintexts are g to the power of the query elements instead of the query elements themselves, the verifier, after decrypting, gets an answer of the form g to the power of the relevant a_i. This means that unless the decision procedure on a_1 through a_k is linear, the verifier must solve a discrete log problem, which is a bit of an issue, as we'll discuss later. We refer to the paper for the modified packing techniques that actually preserve soundness, because here we present only the intuition, as well as for a two-query variant of the Hadamard linear PCP and a discussion of why the Hadamard linear PCP is bounded, and for optimizing the verifier via a tail bound and a preprocessed lookup table, in particular how we handle the discrete log problem that the verifier needs to solve.

Our next contribution is a laconic argument with two group elements from hardness of approximation in codes. Our motivation is to get negligible soundness, but the previous two-element SNARG that we've just seen has a noticeable soundness error, and this is somewhat inherent, right? Because the verifier had to solve a discrete log problem. However, if the verifier had a linear decision procedure, the soundness error could be made negligible. Unfortunately, Groth proved a barrier: for any hard language L, a one-query linear PCP with a linear decision procedure is unlikely. Fortunately, this lower bound assumes a negligible completeness error, and we are able to overcome the barrier by presenting a one-query linear PCP with a noticeable completeness error. How do we do it? We start with something called the gap minimum weight solution problem.
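Before defining that problem, here is the minimal packing sketch promised above: a B-bounded three-query linear PCP is packed into a single query by weighting the queries with powers of B, and the single answer is decomposed back into the original answers by its base-B digits. The values are toy ones; the paper's actual packing is modified to preserve soundness.

```python
# Toy illustration of the packing intuition (not the soundness-preserving version).
B = 10                                      # bound on every honest answer a_i (over the integers)
queries = [[1, 2, 0], [0, 3, 1], [2, 0, 4]] # q_1, q_2, q_3 of a 3-query linear PCP
proof   = [2, 1, 1]                         # honest proof pi, chosen so each a_i < B

packed_query  = [sum(q[j] * B**i for i, q in enumerate(queries)) for j in range(len(proof))]
packed_answer = sum(c * x for c, x in zip(packed_query, proof))   # single inner product <q_packed, pi>

answers = []
for _ in queries:                           # base-B decomposition recovers the k original answers
    answers.append(packed_answer % B)
    packed_answer //= B

expected = [sum(c * x for c, x in zip(q, proof)) for q in queries]
assert answers == expected                  # a_1 = 4, a_2 = 4, a_3 = 8
```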
So, an instance of the beta-gap minimum weight solution problem over a field F consists of a matrix A, a vector b, and an integer d, such that in the yes case there is a vector v with Hamming weight at most d such that Av = b, and in the no case, for every vector v with Hamming weight less than beta times d, Av is not equal to b. We prove a theorem, which is a direct adaptation of a theorem proved by Harsha et al., that for every finite field F whose size is at most exponential in n, every constant c greater than zero, and beta = log^c(n), this problem is NP-hard.

Now, how do we go from an instance of this problem to a one-query input-dependent linear PCP, which will later lead to a two-element laconic argument? We generate r, which is a random vector, and e, which is a small noise vector, so every coordinate of e is zero with high probability and random otherwise. The input-dependent query is rA + e, and the verifier expects the inner product of r and b as the answer. Why does it work? In the yes case, when (A, b, d) is a yes instance of the minimum weight solution problem, there is a low-Hamming-weight vector v such that Av = b, and this vector is used as the linear PCP proof. If we take the query and compute its inner product with v, we get the inner product of r and b plus the inner product of e and v. The inner product of e and v is most probably zero, because e and v are both very sparse vectors, and the inner product of r and b is indeed the answer that we expect as the right answer. In the no case, there is no low-Hamming-weight vector v such that Av = b, and then the prover has two strategies. Either it picks a vector v such that Av = b, but such a vector must be dense, and in this case the inner product of e and v is random, because now v is dense; or it picks a vector v such that Av is not equal to b, and then the answer involves the inner product of r and Av, which is random. Either way, the verifier gets convinced with only negligible probability.

So now we can take this one-query input-dependent linear PCP, plug it into the Bitansky et al. compiler with the ElGamal encryption, and get a two-element laconic argument with negligible soundness. But we want to push farther: we want to get witness encryption. So what is the connection between witness encryption and all of the laconic arguments we have talked about so far? The connection is something called predictable arguments. In a predictable argument, introduced by Faonio et al., at every round there is exactly one answer that can cause the verifier to accept; all the other answers cause the verifier to reject. Faonio et al. show that a predictable argument with perfect completeness and negligible soundness implies witness encryption without further assumptions; this is a result that we extend to the noticeable-completeness-error case. So if we had a laconic argument which is predictable in the generic group model, we would have witness encryption. But did we just see a predictable argument? Recall that in the one-query input-dependent linear PCP we had one answer that we expect; however, after encrypting it with the Bitansky et al. compilation, we now effectively accept every possible encryption of this answer, which renders our protocol unpredictable. Maybe we can do something like encoding the whole query in the exponent, which is a deterministic method? The sad answer is no, because this might leak the error vector used in the query.
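To ground the yes-case arithmetic above, here is a small numeric sketch of this input-dependent linear PCP over a toy prime field. The matrix, witness, and field size are illustrative, and the noise vector e is taken to be all zeros here, which is its typical value since it is sparse.

```python
# Toy yes-case run of the input-dependent one-query linear PCP from a GapMWSP instance.
import random

F = 101                                           # toy prime field
A = [[1, 0, 2, 0], [0, 3, 0, 0], [4, 0, 0, 1]]    # 3 x 4 matrix of a yes instance
v = [5, 0, 0, 7]                                  # sparse solution, Hamming weight 2
b = [sum(A[i][j] * v[j] for j in range(4)) % F for i in range(3)]   # b = A v

r = [random.randrange(F) for _ in range(3)]       # verifier's random vector
e = [0, 0, 0, 0]                                  # sparse noise (all zero in this run)
query = [(sum(r[i] * A[i][j] for i in range(3)) + e[j]) % F for j in range(4)]   # q = rA + e

prover_answer = sum(query[j] * v[j] for j in range(4)) % F          # <q, v>
expected      = sum(r[i] * b[i] for i in range(3)) % F              # verifier expects <r, b>
assert prover_answer == expected                  # <rA + e, v> = <r, Av> + <e, v> = <r, b>
```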
Recall that in the query we used the matrix A from a minimum weight solution problem instance. But maybe there is some sparse vector v such that Av = 0. In that case, the prover could just take the query and compute the inner product of the query with v; it would then be left with the inner product of e and v. Since e and v are sparse, this is zero with some non-negligible probability, and otherwise it is nonzero, so the prover can look at this expression, learn what the error vector is, and thus break the scheme. If we knew that for every low-Hamming-weight vector v, Av is not equal to zero, this attack would be infeasible, and this is exactly the intuition that led us to look at the minimum distance problem. An instance of the beta-gap minimum distance problem over a field F consists of a matrix A and an integer d, such that in the yes case the code spanned by the matrix A has distance at most d, and in the no case the code spanned by the matrix A has distance at least beta times d.

What is known about this problem? There is already a long series of works aiming to measure how hard this problem is to approximate, starting with Dumer et al., continuing with Cheng et al., and most recently by Austrin et al. The latter result shows that for some beta which is omega(log n), and every field of prime size polynomial in n, there is a quasi-polynomial-time reduction from SAT to the minimum distance problem with gap beta. We need something extremely similar: for the same gap parameter omega(log n) and some field which is slightly larger, quasi-polynomial in n, we need a polynomial-time reduction from SAT to the minimum distance problem with gap beta. This hypothesis looks plausible to experts in the field, who do not see any barrier to proving it unconditionally in the future. In addition, the positive results on approximating the minimum distance problem are extremely far from this gap parameter and are closer to a linear ratio. That is what leads us to believe that this hypothesis may be proven unconditionally in the future.

So how do we go from a minimum distance problem instance to a predictable argument? First, we generate r and c, which are random vectors, s, which is a random scalar, and e, like before, a small noise vector, so every coordinate is zero with high probability and random otherwise; here H is the parity-check matrix of the matrix A from the minimum distance problem instance. Now we encode the query, which has the form rH + s times c + e, in the exponent, so the encryption is deterministic: the verifier's message is g to the power of this query q, together with c in the clear, and the expected answer is g to the power of s. We analyze this predictable argument in the generic group model, meaning that all the prover can do is take linear combinations of the query and test them. What we have to prove is that in the yes case there is a linear combination that yields g to the power of s, and in the no case every linear combination that the prover applies to this query gives it no information, as the result will be random.
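Anticipating the yes-case analysis described next, here is a small numeric sketch of this predictable argument with a toy group and an illustrative parity-check matrix. The query form rH + s times c + e follows the description above, and a prover holding a low-weight codeword v with Hv = 0 recovers the single expected answer g to the power of s.

```python
# Toy run of the predictable argument from a minimum distance problem instance.
import random

p, q, g = 1019, 509, 4                      # toy group of prime order q; field F = Z_q
H = [[1, 508, 1, 0, 0, 1],                  # illustrative parity-check matrix; columns 1 and 2 cancel,
     [2, 507, 0, 1, 0, 1],                  # so v = (1,1,0,0,0,0) is a weight-2 codeword (Hv = 0)
     [3, 506, 0, 0, 1, 1]]
v = [1, 1, 0, 0, 0, 0]

r = [random.randrange(q) for _ in range(3)] # verifier's randomness
c = [3, 8, 5, 2, 9, 4]                      # sent in the clear alongside the query
s = random.randrange(1, q)                  # the secret exponent the prover must predict
e = [0] * 6                                 # sparse noise vector (all zero in this run)

query = [(sum(r[i] * H[i][j] for i in range(3)) + s * c[j] + e[j]) % q for j in range(6)]
msg = [pow(g, qj, p) for qj in query]       # deterministic encoding of q in the exponent

# Honest prover: g^<q,v> = g^(s*<c,v>) since Hv = 0 and <e,v> = 0, then divide by <c,v> in the exponent.
g_qv = 1
for mj, vj in zip(msg, v):
    g_qv = g_qv * pow(mj, vj, p) % p
cv = sum(ci * vi for ci, vi in zip(c, v)) % q
answer = pow(g_qv, pow(cv, -1, q), p)       # exponent-side division by <c,v> (nonzero here)

assert answer == pow(g, s, p)               # the unique expected answer g^s
```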
Let's look at the yes case first. In the yes case, there is a low-Hamming-weight vector v such that Hv = 0, because H is the parity-check matrix of A and v is a low-weight codeword of A; this vector is exactly the prover's strategy. If we take g to the power of the original query q and compute the inner product with v in the exponent, we finally get g to the power of s times the inner product of c and v, plus the inner product of e and v. Like before, the inner product of e and v is zero with large probability, and the prover knows the inner product of c and v, since it knows the vector v and also the vector c in the clear, as we sent c alongside the query; thus it can compute g to the power of s. In the no case, there is no low-Hamming-weight vector v such that Hv = 0, so the prover can only try to apply some linear strategy. The first option, like before, is to pick a dense vector v such that Hv = 0, and then the result is random, right? Because the inner product of e and v, now that v is dense, is most probably random. The other strategy is to pick a vector v such that Hv is not zero, and then the term coming from rH is totally random. So the prover does not learn anything. This is the intuition; we provide the formal proof in the paper.

So, back to witness encryption: using the Faonio et al. generic compilation, this predictable argument implies witness encryption. In the paper, we show that any one-element laconic argument implies witness encryption, as long as it has negligible soundness and something that we call a generic verification procedure, meaning that the verifier can be formalized as a generic group model algorithm.

To summarize, we showed the first practical designated-verifier SNARG with two group elements, we showed the first two-element laconic argument with negligible soundness error, and we showed how to get witness encryption in the generic group model under a plausible complexity-theoretic hypothesis. There are some open questions. First, as I stated before, our designated-verifier SNARG relies on the Hadamard linear PCP, which has quadratic prover complexity; maybe we can do better. Maybe we can present a two-element laconic argument with perfect completeness, or improve the witness encryption result and prove it unconditionally in the generic group model. Thank you.