So let's continue with the third talk of the session, on the untapped potential of encoding predicates by arithmetic circuits and their applications, by Shuichi Katsumata.

Thank you for the introduction. My name is difficult to pronounce, so please just call me Shu if you see me offline. The title of my talk is "The Untapped Potential of Encoding Predicates by Arithmetic Circuits and Their Applications," and it's a little bit ambiguous, so first of all I'll try to explain what I did.

There are two primitives that we focused on, and it's not important right now exactly what they are; these are just the primitives I've worked on. The first one is pairing-based verifiable random functions (VRFs). The current situation is that there are many constructions of these, but they either require a strong security assumption or require a long verification key and proof size. The other primitive we focus on is lattice-based predicate encryption supporting the so-called multi-dimensional equality predicates. Again, it's not important what this is right now; the point is that, in the current state of the art, we require either a strong security assumption or exponential decryption time to construct these primitives.

So without further explaining what these are, a natural question would be: can we do better? This is our main question. These two primitives may seem completely independent, and actually they are. However, both of these constructions implicitly or explicitly depend on one particular predicate during construction. This is our key insight into these two primitives.

That being said, the summary of our result is that we detach these predicates from the cryptographic primitives and provide efficient encodings of the predicates into shallow arithmetic circuits. To be more concrete, we propose two encoding schemes for this particular predicate, called the subset predicate, and based on subset predicates we construct a nice pairing-based VRF and a lattice-based predicate encryption scheme for the multi-dimensional equality predicates.

One thing to note is that this work is not about circuit lower bounds. It may seem like that, but the encoding that we provide as an arithmetic circuit has to be compatible with the underlying algebraic structure, so we don't get this for free. That's why we require two encoding schemes for the subset predicate: one for the VRF and one for the lattice-based predicate encryption scheme.

Okay, that being said, the overview of this talk will mainly focus on the VRF construction. This will deviate quite a bit from the presentation taken in the paper, and all the arguments are heavily simplified so that I can better convey the intuition.

So first of all, what are VRFs? VRF is short for verifiable random function, introduced by Micali, Rabin, and Vadhan in '99 at FOCS. It is essentially a PRF that also allows you to prove that you computed the output value y in a correct manner. So when somebody says "I output this value y using this input x," a VRF also allows them to attest to this fact by producing a proof pi. The evaluation algorithm, on input x, outputs y and a proof pi, and using the verification key output by the generation algorithm, anybody can publicly check that this proof is correct, so they can be sure that this output value y was really computed from x.
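Purely to pin down this syntax in code form, here is a minimal, hypothetical Python sketch of the VRF interface. The class and method names are my own placeholders, not the paper's construction; a real scheme instantiates these with pairing-group elements.

```python
# Hypothetical interface sketch only -- names are illustrative placeholders.
class VRF:
    def gen(self):
        """Output (vk, sk): a public verification key and a secret key."""
        raise NotImplementedError

    def eval(self, sk, x):
        """On input x, output the value y together with a proof pi
        attesting that y was computed correctly from x."""
        raise NotImplementedError

    def verify(self, vk, x, y, pi):
        """Publicly check the proof. Uniqueness demands that for *any*
        vk (even a malformed one) and any x, at most one pair (y, pi)
        is ever accepted."""
        raise NotImplementedError
```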
And VRFs have several types of applications. The syntax of a VRF is very simple: it's basically a PRF, and the only differences are that there is a verification key and a proof pi, and there is a new Verify algorithm.

We have three requirements for VRFs. The first is the very obvious correctness, which states that if you honestly evaluate on an input x, then the result should verify.

The second requirement is uniqueness, and this is what makes VRFs really interesting and difficult to construct. Uniqueness says that for all verification keys and all inputs x, there can exist at most one pair of value y and proof pi that the Verify algorithm accepts. This is a very strong requirement in the sense that the verification key can possibly be malformed; it does not necessarily have to be generated by the generation algorithm. This is why VRFs are so difficult, and why they can't be built simply from PRFs and non-interactive zero-knowledge proofs: a NIZK has an efficient zero-knowledge simulator, so you can always simulate accepting proofs, and that defeats the purpose of uniqueness.

The final requirement is adaptive pseudorandomness, which captures the PRF-like guarantee. Here the adversary can look at input-output pairs of his choice by querying the challenger, and at some point he asks for a challenge on an input he has not queried before. When he receives the challenge value y_b, he has to determine whether it is a completely random element or the element output by the evaluation algorithm. When his success probability is very close to 1/2, we say the scheme is adaptively pseudorandom.

Finally, for preparation: the tool we will be using in this presentation is an ordinary symmetric bilinear map, and the hardness assumption is the L-decisional Diffie-Hellman (L-DDH) assumption. We are given the tuple (g, g^alpha, ..., g^(alpha^L)) together with an element Z, and we must decide whether Z has the prescribed form or is a completely random element of the target group. Since this is a non-static assumption, we want L to be as small as possible, because a smaller L means a weaker assumption.

The other crucial observation we will be using is that, given this tuple, we can compute g^(f(alpha)) for any public polynomial f of degree up to L. So whenever a polynomial shows up later, we want that polynomial to be of as low a degree as possible, because that allows us to rely on a very weak assumption.
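To make that polynomial-in-the-exponent observation concrete, here is a toy Python sketch with tiny, insecure parameters (the numbers are my own illustrative assumptions, not from the talk): given the tuple (g, g^alpha, ..., g^(alpha^L)), anyone can compute g^(f(alpha)) for a public polynomial f of degree at most L without ever learning alpha.

```python
p = 1000003                 # toy prime modulus; real schemes use crypto sizes
g = 5                       # toy base element
alpha = 123456              # secret exponent, known only to the challenger
L = 4
powers = [pow(g, alpha**i, p) for i in range(L + 1)]   # the public tuple

def eval_in_exponent(coeffs):
    """Compute g^{f(alpha)} for a public f(T) = sum_i c_i * T^i with
    deg f <= L, using only the public tuple (alpha is never used)."""
    acc = 1
    for c_i, g_alpha_i in zip(coeffs, powers):
        acc = acc * pow(g_alpha_i, c_i, p) % p   # (g^{alpha^i})^{c_i}
    return acc

# Example: f(T) = 7 + 3T + 2T^3, degree 3 <= L.
assert eval_in_exponent([7, 3, 0, 2, 0]) == pow(g, 7 + 3*alpha + 2*alpha**3, p)
```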
Okay, from now on I want to talk about previous works and how VRFs are constructed, at a very high level. There are essentially two lines of work on VRFs. The first is generic constructions from general primitives. This is a very nice approach in the sense that it provides valuable intuition about how strong VRFs are as a cryptographic primitive. We currently know that general non-interactive witness-indistinguishable proofs and constrained PRFs suffice to construct VRFs. However, since we don't know any efficient constructions of these two primitives, the VRFs built from them are either highly inefficient or require strong assumptions. The other approach is specialized constructions based on pairings, and along this line there are many constructions that are either very efficient or based on weak assumptions. This talk will mainly focus on this line of work.

As I said, there are a lot of works along this line, and I want to explain how they construct VRFs in a very high-level way. Here is a template construction, and most schemes follow this general framework. The generation algorithm outputs a verification key and a secret key that are just a bunch of discrete-log instances: the verification key contains elements g^w1, g^w2, and so on, and the secret key is the exponents. For evaluation on input x, we compute a function f on the input x and the secret keys. This function f is used as a black box for now: it's publicly known, but it's not important yet what it is; everybody knows f, and the evaluator computes it on the input x and the secret keys. The output is very simple: we just put this function value in the exponent, and that is the pseudorandom value y. The proof consists of scheme-specific elements that help verification; essentially they are helper components of a similar exponent form, and their role will become clear when I explain the verification algorithm.

To verify, we compute the coefficients of the function f. As I said, f is public, but we can't compute the function value itself, because we don't know the secret key. What we can do is view the secrets as indeterminates and compute the polynomial form of f on input x; that is, we can compute its coefficients. Then, using the terms in the proof pi, we iteratively check the validity of y. What I mean is: if the first proof component d1 is supposed to be of a certain form, say g^(w1*w2), we can check this by taking the two verification key components g^w1 and g^w2 and using the pairing to see whether d1 is in the correct form. Once we know d1 is correct, we can move on to checking whether d2 is correct, and we climb up this ladder. At the very end, we can check that y is correctly formed, because we know the polynomial form of f. So this is the general way to create a VRF scheme.
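As an illustration of this ladder-style verification, here is a toy Python sketch. To avoid implementing an actual pairing, group elements are represented directly by their exponents, so the "pairing" is just multiplication modulo a toy order q; this leaks all secrets and is only meant to show the flow of the iterative check, under assumed toy parameters of my own.

```python
# Toy model: the element g^a is represented by the exponent a itself,
# so e(g^a, g^b) = e(g, g)^{ab} becomes a*b mod q. This leaks everything
# and is ONLY meant to demonstrate the ladder-check logic.
q = 101                      # toy group order

def e(a, b):                 # stand-in for the bilinear map
    return a * b % q

w = [3, 7, 9, 5]             # secret exponents; vk publishes each g^{w_i}

# Eval: f(w) = w1*w2*w3*w4 computed as a chain; each partial product
# theta_j becomes a helper component g^{theta_j} in the proof pi.
theta = []
acc = w[0]
for wi in w[1:]:
    acc = acc * wi % q
    theta.append(acc)
y = theta[-1]                # the output y = e(g, g)^{f(w)}

# Verify: climb the ladder, checking one proof component at a time.
ok = e(w[0], w[1]) == e(theta[0], 1)           # is d_1 = g^{w1*w2}?
for j in range(1, len(theta)):
    ok &= e(theta[j-1], w[j+1]) == e(theta[j], 1)
ok &= (y == e(theta[-1], 1))                   # finally check y itself
print(ok)                    # True
```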
Now, as I said: what do we use as this function f? Usually, people use something called an admissible hash function. It is an information-theoretic object; in essence, it is a keyed function that secretly partitions the input space. The key K is known only to the person who instantiates the admissible hash function, and on an input x it outputs 0 if and only if x is inside the partition, the gray area in the picture, where the gray area is determined by the key K; it outputs 1 otherwise. This is a very nice tool that lets the simulator secretly embed secrets during simulation. I won't go into the details of how it is defined, but all VRF schemes so far actually rely on admissible hash functions, and they are a useful tool for proving adaptive security in other settings as well, such as IBE or signature schemes.

Finally, before getting into the details, I want to talk about what makes a good VRF scheme. There are basically two measures. One is size: we want the verification key and the proof to be as small as possible, because they influence the communication cost. The other measure is, obviously, that if we want to prove the VRF scheme secure, we want to prove it under the weakest assumption possible. These are the two traits of a good VRF.

So now I'll explain our work. This is a very non-exhaustive list, so there are a lot of other works on VRFs, but these three works are the state of the art so far. The top two, by Hofheinz and Jager and by Jager, both from TCC, were the first to construct VRF schemes from a weak hardness assumption. When I say "weak" in this presentation, I roughly mean that the relevant factors are sublinear in the security parameter lambda. That was the situation until, this year at Crypto, Yamada presented a VRF scheme with a short verification key and short proofs; however, it required a stronger hardness assumption than Hofheinz-Jager's and Jager's. Our work combines the best of these two worlds: we achieve the first scheme with short verification keys and proofs while keeping the hardness assumption weak. This is our result.

Our high-level approach is to start from the Yamada 17 scheme and try to weaken its assumption. This is just our starting point, the high-level approach we took; the actual construction is quite different.

The main observation of our work is that even if a function f is defined uniquely, the best way of computing it may depend on the application. An easy example: say f(n) = 1 + 2 + ... + n. This is known by everybody and defined uniquely. However, how to compute it may depend on the application. Obviously it can be computed by adding 1 through n, or it could just be n(n+1)/2. But depending on what kind of algebraic structure the cryptographic primitive offers, maybe we can't divide, so maybe the first way is the best way to compute this function. That is the very general intuition behind "best way."

So what we do is propose a better way to compute admissible hash functions for our setting. To explain that, I first want to show a very nice contribution of the Yamada 17 scheme. In his paper, he realized that computing the admissible hash function is essentially equivalent to computing the subset predicate: computing one is the same as computing the other, at a high level. The subset predicate is defined as the function that, on the secret key K and input X, equals 1 if K is a subset of X, and outputs 0 otherwise. And if you remember the earlier picture, this corresponds to the implicit partition made by the admissible hash function. So the question now boils down to: how do we compute this subset predicate inside our VRF scheme?

The previous approach, taken by Yamada 17 (there are various parameter settings here, but they are not important), represented the predicate as a Boolean circuit. To decide whether K is a subset of X, first note that if this holds, then all of the elements of the set K are included in X; that's an AND over the elements of K. To check whether a given k_i is included, we check whether some element x_j of X equals k_i; that's an OR over the elements of X, so this is an AND-OR computation. Finally, since this is a Boolean circuit representation, we have to bit-decompose these elements: to check that k_i equals x_j, we bit-decompose them and check that all the coordinates are equal to each other, which is another AND. So, in a Boolean circuit representation, this is an AND-OR-AND circuit.
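Here is a small Python sketch of this multiplicative AND-OR-AND encoding, my own illustrative code with elements given as bit tuples: AND and OR are both written as products, which is exactly what makes the degree blow up.

```python
def eq_poly(a_bits, b_bits):
    """Inner AND over bit positions, written multiplicatively:
    prod_t (a_t*b_t + (1-a_t)*(1-b_t)) = 1 iff the elements are equal."""
    r = 1
    for a, b in zip(a_bits, b_bits):
        r *= a*b + (1-a)*(1-b)
    return r

def or_poly(vals):
    """OR written multiplicatively: 1 - prod(1 - v)."""
    r = 1
    for v in vals:
        r *= 1 - v
    return 1 - r

def subset_and_or_and(K, X):
    """AND over k_i in K of ( OR over x_j in X of EQ(k_i, x_j) )."""
    r = 1
    for k in K:
        r *= or_poly(eq_poly(k, x) for x in X)
    return r

K = [(0, 1), (1, 1)]                    # elements as bit tuples
X = [(0, 1), (1, 0), (1, 1)]
assert subset_and_or_and(K, X) == 1     # K is a subset of X
assert subset_and_or_and([(0, 0)], X) == 0
```

Counting degrees in the bits of K: each EQ is a product of m linear factors, the OR multiplies |X| of those together, and the outer AND multiplies |K| ORs, giving total degree on the order of |K| * |X| * m.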
And essentially, written in arithmetic fashion, this is a highly multiplicative polynomial. It is a high-degree polynomial: if we instantiate the admissible hash function, the degree will be roughly lambda times log-cubed lambda, where lambda is the security parameter. And if you remember, the degree of this polynomial is essentially the L in the L-DDH assumption, so we would require an L-DDH assumption where L is superlinear in lambda, which is a stronger assumption than what we want.

Our idea is that we can do a more efficient embedding by observing that, for each k_i, there can exist at most one x_j such that x_j equals k_i. This is because, say k_i equals x_1: then it can't equal any other element, because the elements of X are all distinct. Using this very simple observation: since at most one clause in the OR is satisfied, and each clause takes value 0 or 1, the OR is actually equivalent to just adding the clauses. They only take values 0 or 1, and at most one of them is satisfied, so the sum is functionally equivalent to the OR. The OR required a multiplication, but we can change that multiplication into an addition. And by choosing the right part of the circuit to turn into an addition, we can eliminate the highest-degree term: we can lower the degree down to log-cubed lambda with this simple observation. Now L is poly-logarithmic, in particular sublinear, so it is a weaker assumption. This is the main idea of our work.

However, a lot of technicalities are swept under the rug here. As I said, this paper is not about circuit lower bounds; it may look like it is about encoding the predicate into the shallowest possible circuit, but all these encodings must be compatible with the algebraic structure that the VRF offers us. Since our encoding is different from Yamada's original scheme, the actual construction is pretty different. Furthermore, by taking advantage of the linear form here, we can obtain a quadratic reduction in the verification key size. So this is the very high-level idea of our VRF scheme.
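To make the degree reduction concrete, here is the same subset predicate with the OR layer replaced by a sum, again my own illustrative sketch; eq_poly is repeated from the previous snippet. Because the elements of X are distinct, at most one EQ term per k_i can be 1, so the sum agrees with the OR while no longer multiplying |X| polynomials together.

```python
def eq_poly(a_bits, b_bits):            # same inner AND as before
    r = 1
    for a, b in zip(a_bits, b_bits):
        r *= a*b + (1-a)*(1-b)
    return r

def subset_additive(K, X):
    """AND over k_i of ( SUM over x_j of EQ(k_i, x_j) ): functionally
    equivalent to the OR version whenever the x_j are distinct."""
    r = 1
    for k in K:
        r *= sum(eq_poly(k, x) for x in X)
    return r

K = [(0, 1), (1, 1)]
X = [(0, 1), (1, 0), (1, 1)]
assert subset_additive(K, X) == 1       # agrees with the OR encoding
assert subset_additive([(0, 0)], X) == 0
```

The sum layer now has the same degree as a single EQ, so the degree of the OR layer drops by a factor of |X|, which is where the improvement from lambda times log-cubed lambda down to log-cubed lambda comes from.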
Before we end, I want to explain a little extra about what we do for the predicate encryption scheme. We can actually push that idea one step further: if we don't need to preserve functionality, we can optimize the circuit encodings even more. As a quick recap, what we did for the VRF was to check whether K is a subset of X with a representation that outputs 1 if K is a subset of X and 0 otherwise, preserving the functionality of the Boolean value. However, do we really need to preserve the output value? Maybe in some cases we don't. A more general view is this: if K is inside X, the encoding outputs an element of some set S0, and otherwise it outputs an element of S1. As long as S0 and S1 are disjoint, this still defines a partition in some sense, and the range no longer needs to be {0, 1}.

Using this observation: K is inside X if and only if all the k_i are in X too, so all the inner clauses must be satisfied. Say the cardinality of K is eta. Then K being a subset of X is equivalent to saying that the sum of the clause values equals eta. And using the previous functionality-preserving encoding scheme, we can convert the inner computation into a summation as well. So we can lower the outer product down to a summation too: the output is no longer 0 or 1, but it equals eta if K is inside X, and something other than eta otherwise. In fact, using other techniques, we can lower this down even further, so in the end this will be a linear function. And using this linear function, we construct our lattice-based predicate encryption scheme for the multi-dimensional equality predicates.

As a conclusion, I guess the main takeaway of my presentation is this: many cryptographic schemes embed a predicate in a very implicit manner. By detaching the predicate from the cryptographic scheme, encoding it into a shallow arithmetic circuit, and bringing that back into the cryptographic primitive, we can sometimes get a significant efficiency gain in the actual construction. Thank you for listening.

We have time for questions. There are no questions, so let's thank the speaker.