Hi everyone, I'm Dongwoo Kim, and I'll talk about flexible and efficient verifiable computation on encrypted data. This is joint work with Alexandre Bois, Ignacio Cascudo, and Dario Fiore. Let me start with the motivation of our work. The motivation is outsourced computation, where a client outsources its data and computation to a server, and the server sends back the result. Here we can think of two security concerns for the client. The first is data privacy: the client does not want to expose its data. The second is computation integrity: the client wants a guarantee that the result is correct. Our goal is to provide a solution to both problems with efficiency. Here, efficiency means that the computation, storage, and communication used in the protocol should be as small as possible. For data privacy, fully homomorphic encryption can be a great solution. With homomorphic encryption, one can compute any function on encrypted data without decrypting it. If a client outsources its data encrypted under homomorphic encryption, the client can get the result without exposing the data to anyone. Following Gentry's blueprint, many works have improved the efficiency of homomorphic encryption, and most schemes are now usable in practice. We remark that the client is efficient, since it only performs encryption and decryption, while the server's computational burden has also been improved. For computation integrity, verifiable computation can be a good solution. In verifiable computation, the server provides a proof of the computation, and the client can verify the result with it. There has been a lot of work on VC, and we can now see such schemes used in practice as SNARKs. On the efficiency of this solution, verification is usually cheap and independent of the computation, and the prover's burden has also been improved. Then what is a solution for both privacy and integrity? There are only a few works on this, and FGP14 provides a generic solution.
The solution of FGP14 is to outsource the computation with fully homomorphic encryption and then verify the homomorphic computation with verifiable computation. This is a quite simple recipe, but the problem is to design an efficient VC for the FHE computation. Only a few works address this, and our main contribution is a solution to this problem. More precisely, we can compare the previous work with ours as VC schemes for FHE. First, our solution can verify FHE with any ciphertext modulus, while the previous work only supports prime moduli of large size, for example bigger than 256 bits for 128-bit security. This enables a more flexible choice of FHE parameters and more efficient use of FHE. We also generalize the previous solution so that we can verify non-deterministic computation on encrypted data. Technically, the main point is the introduction of our new homomorphic hash functions, which I will explain later. We also adapt the GKR protocol to rings to verify the FHE computation, and we provide a model of VC on encrypted data for non-deterministic computations. Now let me go over the basic syntax and the generic scheme. First, a verifiable computation scheme is composed of four algorithms: the preprocessing algorithm generates a CRS for a function F, and the verification algorithm checks the computation result against the proof generated by the proving algorithm. VC must satisfy the correctness, soundness, and succinctness properties. Soundness says that an adversary cannot pass verification with an incorrect result, while succinctness says that the proof is small and verification is faster than computing the function F itself. Homomorphic encryption is composed of four algorithms called KeyGen, Encrypt, Eval, and Decrypt.
The correctness property ensures that computation on ciphertexts homomorphically mirrors computation on the messages, and the ciphertexts also satisfy the usual CPA security. Now I can introduce the generic scheme of FGP14, which describes verifiable computation on encrypted data. The generic scheme is composed of the algorithms Setup, KeyGen, ProbGen, Compute, Verify, and Decode, and it describes the scheme in the figure on the right, which verifies the FHE computation with VC. We can see that this generic scheme, built from VC and FHE, inherits the required properties, such as privacy, integrity, and outsourceability, from the properties of VC and FHE. Here, the main problem is whether VC for the FHE computation can be efficient. Let me explain a bit more about the efficiency of VC on homomorphic computation. Usually in homomorphic encryption, the ciphertext is much bigger than the plaintext, and operations on ciphertexts are much heavier than those on plaintexts. For example, in the BV homomorphic encryption scheme, a ciphertext operation is at least d times costlier than the corresponding plaintext operation, where d can be 2^11 to 2^15. Therefore, if we apply VC to the homomorphic computation directly, it is also about d times costlier than VC on plaintext, which makes the overall VC scheme on encrypted data too inefficient for practical use. It is therefore essential to design an efficient VC for the homomorphic computation. Our solution to this problem is to apply VC to the image of the homomorphic computation under a homomorphic hash. Here, a homomorphic hash is a ring homomorphism that preserves ciphertext addition and multiplication, and the range of the hash, written D_H, can be much smaller than the domain, the ciphertext space. Therefore, VC on this image can be much more efficient than VC on the ciphertexts. Since the hash is homomorphic, if the result is correct, then its hash image is also verified to be correct.
However, an adversary could send a wrong result that has the same image under the hash as the correct result. To prevent this, the verifier samples the hash from a family of hashes that satisfies an ε-universal property. This property guarantees that if two ciphertexts are different, their images under the hash are also different with high probability, given that the hash is uniformly sampled from the family. With this idea of homomorphic hashing, which was also implicitly exploited in the previous work, we can describe our VC on encrypted data. After the prover, or server, computes the result, the verifier, or client, samples a hash from the family and sends it to the prover. The prover then provides a proof about the hash image of the ciphertext computation. Finally, the verifier checks this image and accepts or rejects the result according to this verification. We can see that, given that the hash is sampled from an ε-universal hash family, the success probability of a cheating adversary is at most ε. Also, the cost of VC is reduced significantly, and we confirmed this with our instantiation, which will be explained later. We also remark that the interaction between verifier and prover can be made non-interactive with the Fiat-Shamir heuristic. Now let me introduce the instantiation of our VC scheme on encrypted data. For homomorphic encryption we use the BV scheme, and for verifiable computation we use the GKR protocol. Our main contribution, the construction of the homomorphic hash, will be the main focus. First, let me recall the BV homomorphic encryption scheme briefly. Let Φ be a cyclotomic polynomial, and let R_t and R_q be the polynomial rings modulo t and q, respectively, quotiented by Φ. We focus on the ciphertext space, which is composed of polynomials in Y with coefficients from R_q.
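To make the ε-universal property concrete, here is a toy univariate illustration, entirely my own and not from the talk: the hash family H_a maps a polynomial over Z_q (q prime) to its evaluation at a random point a. A collision between two distinct polynomials happens exactly when a is a root of their difference, so by counting roots we can compute the exact collision probability and check it against the Schwartz-Zippel bound d/q.

```python
import random

q, d = 10007, 20  # toy prime modulus and polynomial degree (illustrative only)
rng = random.Random(1)

def H(c, a):
    # hash H_a: evaluate the polynomial c (coefficient list, low-to-high) at a, mod q
    v = 0
    for coef in reversed(c):
        v = (v * a + coef) % q
    return v

# two distinct degree-d polynomials an adversary might try to swap
c1 = [rng.randrange(q) for _ in range(d + 1)]
c2 = [rng.randrange(q) for _ in range(d + 1)]

# a collision H_a(c1) == H_a(c2) happens exactly when a is a root of c1 - c2,
# and a nonzero polynomial of degree d has at most d roots in Z_q
diff = [(x - y) % q for x, y in zip(c1, c2)]
roots = sum(1 for a in range(q) if H(diff, a) == 0)
assert roots <= d

collision_prob = roots / q  # exact Pr over a uniform hash of the family
assert collision_prob <= d / q  # the epsilon-universal bound, epsilon = d/q
```

The real hash in the talk acts on bivariate ciphertext polynomials, but the counting argument is the same: the adversary's cheating probability is bounded by (degree of the difference) / (number of admissible evaluation points).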
Then ciphertext addition and multiplication are just addition and multiplication in this polynomial ring, which is equivalent to Z_q[X, Y] mod Φ. Moreover, if we assume the reduction mod Φ is delayed to the end, we can regard the ciphertext space as simply Z_q[X, Y], in other words, bivariate polynomials over Z_q. Therefore, when we consider the homomorphic hash, the domain of the hash is simply Z_q[X, Y]. Finally, recall that in this homomorphic encryption scheme, to compute a function F with larger multiplicative depth, the size of q must be increased for correctness, and the degree of Φ must be increased for security. Now let me introduce our homomorphic hash on the ciphertext space Z_q[X, Y]. As I said, we can simply consider the corresponding function F̂ on Z_q[X, Y], composed of additions and multiplications on this space. Then, what can serve as a homomorphic hash on this space? The previous work used an evaluation map, which evaluates the input polynomial c(X, Y) at constants α and β. This is simple and useful, but it works only when the ciphertext modulus q is prime. In our work, we generalize this and propose a homomorphic hash that also works when the ciphertext modulus q is a power of a prime. In fact, our homomorphic hash is a generalization of the previous evaluation map: for an input polynomial c(X, Y), we substitute Y with a polynomial r(X), then reduce the result modulo another polynomial h(X). The previous evaluation map then corresponds to the case r(X) = β and h(X) = X − α. It is easy to see that this hash is homomorphic. However, determining when this hash family can be ε-universal is more complicated. To give a solution, we need some facts about Galois rings and the Schwartz-Zippel lemma. First, recall that a Galois field is an extension field, the ring of Z_q polynomials quotiented by an irreducible polynomial h(X), where q is a prime.
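The generalized hash described above can be sketched in a few lines of Python. This is my own toy model, not the paper's implementation: a "ciphertext" is a bivariate polynomial c(X, Y) = Σ c_i(X)·Y^i over Z_Q with Q = p^e, the hash substitutes Y → r(X) and reduces mod a monic h(X) (here a small polynomial assumed irreducible mod 3), and the checks confirm it is a ring homomorphism with the old evaluation map as the special case r(X) = β, h(X) = X − α.

```python
import random
from itertools import zip_longest

Q = 3**5  # toy ciphertext modulus q = p^e with p = 3, e = 5 (illustrative only)

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = c % Q
    for i, c in enumerate(b):
        out[i] = (out[i] + c) % Q
    return trim(out)

def pmul(a, b):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % Q
    return trim(out)

def pmod(a, h):
    # reduce a(X) modulo the monic polynomial h(X), coefficients mod Q
    a = [c % Q for c in a]
    d = len(h) - 1
    for i in range(len(a) - 1, d - 1, -1):
        c = a[i]
        if c:
            for j in range(len(h)):
                a[i - d + j] = (a[i - d + j] - c * h[j]) % Q
    return trim(a[:d])

def hash_ct(ct, r, h):
    # ct = [c0(X), c1(X), ...] encodes c(X, Y) = sum_i c_i(X) * Y^i;
    # the hash substitutes Y -> r(X) and reduces mod h(X)
    acc, rpow = [], [1]
    for ci in ct:
        acc = padd(acc, pmod(pmul(ci, rpow), h))
        rpow = pmod(pmul(rpow, r), h)
    return acc

def ct_add(c, d):
    return [padd(x, y) for x, y in zip_longest(c, d, fillvalue=[])]

def ct_mul(c, d):
    # full bivariate product over Z_Q[X, Y] (reduction mod Phi is delayed)
    out = [[] for _ in range(len(c) + len(d) - 1)]
    for i, ci in enumerate(c):
        for j, dj in enumerate(d):
            out[i + j] = padd(out[i + j], pmul(ci, dj))
    return out

rng = random.Random(0)
h = [2, 0, 1, 1]  # h(X) = X^3 + X^2 + 2, monic (assumed irreducible mod 3)
r = [rng.randrange(Q) for _ in range(3)]
c1 = [[rng.randrange(Q) for _ in range(5)] for _ in range(2)]
c2 = [[rng.randrange(Q) for _ in range(5)] for _ in range(2)]

# ring homomorphism: the hash commutes with ciphertext addition and multiplication
assert hash_ct(ct_add(c1, c2), r, h) == padd(hash_ct(c1, r, h), hash_ct(c2, r, h))
assert hash_ct(ct_mul(c1, c2), r, h) == pmod(pmul(hash_ct(c1, r, h), hash_ct(c2, r, h)), h)

# the previous evaluation map is the special case r(X) = beta, h(X) = X - alpha
alpha, beta = 7, 11
ev = hash_ct(c1, [beta], [(-alpha) % Q, 1])
direct = sum(
    sum(co * pow(alpha, k, Q) for k, co in enumerate(ci)) * pow(beta, i, Q)
    for i, ci in enumerate(c1)
) % Q
assert ev == ([direct] if direct else [])
```

Because the substitution Y → r(X) and the reduction mod h(X) are both ring homomorphisms, the two homomorphism checks hold exactly, with no error terms.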
A Galois ring is the analogue of a Galois field obtained by taking Z_{p^e} instead of Z_p. Recall that the Schwartz-Zippel lemma states that a nonzero polynomial of degree D can have at most D zeros in a subset A with a certain property, an exceptional set. With this, one can show the ε-universality of the hashes when q is prime, since it is in fact equivalent to the statement that every nonzero polynomial evaluates to zero only with small probability. To show ε-universality in our case with a prime-power modulus, the same argument can be used. However, the problem is that the size of such a set A in the base ring is quite small. Therefore, we pass to a Galois-ring extension that has a large enough set A. With this, we can show ε-universality with ε roughly the degree divided by the size of A. Now let me describe our homomorphic hash on Z_{p^e}[X, Y]. For our hash family to be ε-universal, we sample h(X) from the irreducible polynomials; the intermediate ring Z_{p^e}[X] quotiented by h(X) is then our Galois ring. We then sample r(X) from the exceptional set A of this Galois ring. Then, with the Schwartz-Zippel lemma over this Galois ring, we can show ε-universality. This is only an overview, and I recommend reading our paper for the detailed proof. Here we remark that there are enough choices of h(X) and r(X) for any prime p if we increase the degree of h(X). Therefore, we can set our homomorphic hash to satisfy ε-universality with negligible ε. Note that the degree of h need not be very large; it can be similar to or smaller than the security parameter λ. One important and interesting detail is that we need a public sampling procedure for the irreducible polynomial h(X). This can be done by simple rejection sampling, where one samples a random polynomial and checks whether it is irreducible. In our work, we also provide a more efficient sampling procedure for h that uses far fewer random coins than the naive method.
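The naive rejection sampler mentioned above can be sketched as follows. This is my own illustration (the paper's coin-efficient sampler is not shown): sample a random monic polynomial over GF(p) and test irreducibility with Rabin's test, which checks X^(p^n) ≡ X mod h together with gcd conditions for each prime divisor of n.

```python
import random

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def pmod(a, h, p):
    # remainder of a(X) modulo h(X) over Z_p
    a = [c % p for c in a]
    d = len(h) - 1
    inv = pow(h[-1], -1, p)
    for i in range(len(a) - 1, d - 1, -1):
        c = a[i] * inv % p
        if c:
            for j in range(len(h)):
                a[i - d + j] = (a[i - d + j] - c * h[j]) % p
    return trim(a[:d])

def pmulmod(a, b, h, p):
    out = [0] * max(len(a) + len(b) - 1, 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return pmod(out, h, p)

def ppowmod(a, e, h, p):
    res, base = [1], pmod(a, h, p)
    while e:
        if e & 1:
            res = pmulmod(res, base, h, p)
        base = pmulmod(base, base, h, p)
        e >>= 1
    return res

def psub(a, b, p):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] = c % p
    for i, c in enumerate(b):
        out[i] = (out[i] - c) % p
    return trim(out)

def pgcd(a, b, p):
    while trim(b):
        a, b = b, pmod(a, b, p)
    return a

def prime_divisors(n):
    ds, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ds.add(d)
            n //= d
        d += 1
    if n > 1:
        ds.add(n)
    return ds

def is_irreducible(h, p):
    # Rabin's test: h of degree n over GF(p) is irreducible iff
    # X^(p^n) == X mod h, and gcd(X^(p^(n/t)) - X, h) = 1 for every prime t | n
    n = len(h) - 1
    x = [0, 1]
    if psub(ppowmod(x, p**n, h, p), x, p):
        return False
    for t in prime_divisors(n):
        g = pgcd(psub(ppowmod(x, p**(n // t), h, p), x, p), h, p)
        if len(trim(g)) != 1:
            return False
    return True

def sample_irreducible(degree, p, rng=random):
    # naive rejection sampling: a random monic degree-n polynomial over GF(p)
    # is irreducible with probability about 1/n, so a few tries suffice
    while True:
        h = [rng.randrange(p) for _ in range(degree)] + [1]
        if is_irreducible(h, p):
            return h

assert is_irreducible([1, 1, 0, 1, 1, 0, 0, 0, 1], 2)  # x^8+x^4+x^3+x+1 (AES poly)
assert not is_irreducible([1, 0, 1], 2)                # x^2+1 = (x+1)^2 mod 2
h = sample_irreducible(8, 2, random.Random(7))
assert len(h) == 9 and is_irreducible(h, 2)
```

The success probability of roughly 1/n per trial is why the naive method burns many random coins; the derandomized sampler in the paper addresses exactly that.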
With this, the sampling of the homomorphic hash can also be made non-interactive with the Fiat-Shamir heuristic. At this point, our homomorphic hash is good, except that the range of the hash is a Galois ring. Since we have to prove and verify statements about hash images, we need a verifiable computation scheme that works over this Galois ring. For this, we propose to adapt the GKR protocol to the Galois ring as our VC over this ring. First, recall that the GKR protocol, introduced by Goldwasser, Kalai, and Rothblum, is a kind of verifiable computation called an interactive proof. With this protocol, through interactions between verifier and prover, one can prove and verify the evaluation of an arithmetic circuit, a computation composed of additions and multiplications over a finite field. The protocol can be made non-interactive with the Fiat-Shamir heuristic. The original protocol does not use any cryptographic assumptions, and its soundness depends only on the Schwartz-Zippel lemma over the finite field Z_p. Therefore, one can naturally extend the GKR protocol to work over a Galois ring, since the Schwartz-Zippel lemma also holds in the Galois ring. The protocol description is the same as the original one; the only difference is that every element comes from the Galois ring instead of Z_p or a finite field. Then, with this protocol over the Galois ring, one can directly prove and verify computations over the Galois ring. I also remark that the degree of h, that is, the degree of the Galois ring, can be set as in the case of the homomorphic hash to make the soundness error negligible. Finally, I can give the summary of our instantiation of VC on encrypted data. The verifier sends BV ciphertexts and the function to compute to the prover, and the prover returns the result. Here, note that the ciphertexts and the result are regarded as bivariate polynomials over the integers modulo a prime power, by delaying the modular reduction by Φ to the end.
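The core subroutine of GKR is the sumcheck protocol, and it illustrates why soundness rests only on Schwartz-Zippel. Below is a minimal, completeness-only sketch of sumcheck for a multilinear polynomial, my own toy code with prover and verifier merged, over a prime field for simplicity; the Galois-ring variant from the talk would instead draw the challenges from an exceptional set of the ring.

```python
import random

P = 2**61 - 1  # a prime modulus (toy choice; the talk's version uses a Galois ring)

def fold(vals, r):
    # restrict the first remaining variable of the multilinear extension to r
    half = len(vals) // 2
    return [((1 - r) * vals[i] + r * vals[half + i]) % P for i in range(half)]

def sumcheck(vals, rng):
    # vals: evaluations of a multilinear polynomial g on {0,1}^n (x1 = high bit);
    # the claim is S = sum of vals, checked round by round
    claim = sum(vals) % P
    cur = list(vals)
    while len(cur) > 1:
        half = len(cur) // 2
        g0, g1 = sum(cur[:half]) % P, sum(cur[half:]) % P  # prover's message g_i
        if (g0 + g1) % P != claim:                          # verifier's round check
            return False
        r = rng.randrange(P)                                # random challenge
        claim = ((1 - r) * g0 + r * g1) % P                 # new claim: g_i(r)
        cur = fold(cur, r)
    return cur[0] == claim  # final check against an oracle evaluation of g

rng = random.Random(42)
vals = [rng.randrange(P) for _ in range(16)]  # n = 4 variables
assert sumcheck(vals, rng)                    # an honest prover always passes
```

A cheating prover who changes the claimed sum must lie in some round message g_i, and the lie survives the random challenge only if two distinct low-degree polynomials agree at a random point, which Schwartz-Zippel bounds by deg/|field| per round, and that argument carries over verbatim to a Galois ring with a large exceptional set.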
Then the verifier samples a homomorphic hash, which transforms the above ciphertexts and computation into elements and computations over a Galois ring. In response, the prover sends a proof over this Galois ring using the GKR protocol over the ring, and the verifier can verify the result. Finally, the verifier obtains the result by reducing modulo Φ and then decrypting the ciphertext. The degree of h can be set according to the security parameter λ. Now I'll give the performance of our VC on encrypted data. Recall that our VC on encrypted data runs as in the figure above. The time complexities of the verifier and the prover consist of the hash evaluation plus the cost of verifiable computation over the range D_H of the hash. For a more concrete analysis, we can consider an instantiation with a function f whose degree is D and whose number of additions and multiplications is S. Then, assuming our homomorphic hash and the GKR protocol over the Galois ring, the costs of the verifier and the prover, measured in the number of Z_q operations, can be summarized as in these equations. I note that since the reduction mod Φ is delayed to the end, the cost contains a term quadratic in D, whereas the original BV evaluation would give a term linear in D. Still, the time complexity of the prover can be even less than that of the homomorphic computation itself, which shows that our scheme is quite efficient. We can also look at the performance of our VC more concretely with example circuits. We consider two example circuits: one computes an inner product of two vectors, and the other computes the parallel evaluation of a polynomial on multiple inputs. The parameters for our scheme are given in the table. Let Z_q[X] quotiented by Φ be the ciphertext space, and let Z_q[X] quotiented by h be the range of the homomorphic hash. The performance improvement can be estimated by comparing the degree of Φ to the degree d_h of h.
The range of the hash is about 15 to 240 times smaller than the ciphertext space, and therefore our VC is at least that many times faster than applying the VC directly to the ciphertext space without hashing. If one uses the previous work instead, the problem is that log q must be taken bigger than 250, and hence the degree of Φ for the FHE must be bigger than 2^14, which means we would have to use less efficient fully homomorphic encryption parameters, even though they are not required for the FHE computation itself. I finally mention that these parameters assume the worst case where the prime p is 2; if p is bigger than 2, the degree d_h can be much smaller, and our VC becomes more efficient as well. Finally, I will give a short overview of our VC scheme for non-deterministic computation and context hiding. Non-deterministic computation means a computation where the prover can contribute its own additional input, and context hiding means that the decryption of a ciphertext does not leak more information than the message it contains. Non-deterministic computation and the context-hiding property allow VC on encrypted data to be used in more diverse settings. For example, we can think of the case where the party preparing the encrypted data and the party decrypting the results are different. We provide a model for VC on encrypted data with homomorphic hash that covers non-deterministic computations. Roughly, this was done by generalizing the previous work, FNP20, which performs noise flooding by using publicly generated encryptions of 0, and we then combined this with the homomorphic hash. I refer to the paper for the details. Finally, let me end this talk by proposing some open problems. First, it would be good if one could provide an efficient commit-and-prove argument for the homomorphic hash evaluation, or more generally, for computations over a Galois ring.
In our paper, we only provide a generic VC scheme for non-deterministic computation with homomorphic hash, and an efficient instantiation of this scheme would become possible with an instantiation of such argument systems. Second, our instantiation uses the BV encryption scheme, but this homomorphic encryption scheme is not as efficient as current leveled homomorphic encryption schemes. The problem is that our VC would have to support operations other than addition and multiplication to cope with those leveled homomorphic encryption schemes. It would be very interesting if one could propose an efficient VC scheme with that capability. Finally, we expect that our VC scheme is already practical for limited use cases, and it would be fascinating to find a good application. Thank you very much.