Thank you very much for the introduction. Can you hear me? Yes. Okay. I'm going to start the presentation with a quick introduction to homomorphic encryption. I will then present the scheme we improved, HEAAN, and its bootstrapping; I will show you the improvements we made, and I will finish with implementation results and open and future work. So, since we are talking about homomorphic encryption: what is homomorphic encryption? It is a family of encryption schemes that allows us to perform computations on encrypted messages without needing to decrypt or to know any secret. In the general case, imagine that we have two encrypted messages; in this slide, encryption is represented by the green box. What we would like to do is to add or multiply those two ciphertexts homomorphically, and to retrieve as a result a ciphertext encrypting the sum, or the product, of the original messages. Homomorphic encryption is very interesting mainly because of the large number of applications it enables. We can think about electronic voting, or any kind of computation on sensitive data such as medical data, genomic data, financial data, et cetera. Now, a homomorphic ciphertext contains some noise, which is very small at encryption time. The problem is that every time we perform a computation, an addition or, above all, a multiplication, this noise grows; if we do not control it, after a certain number of operations it exceeds a bound and the ciphertext no longer decrypts correctly. In that case we have a limit on the number of operations we can perform, and we talk about leveled homomorphic encryption. This limit is imposed by the parameters of the scheme, and if we would like to evaluate larger computations, we just need to increase the parameters.
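To make the picture concrete, here is a toy numeric sketch of a symmetric LWE-style scheme. It is my own illustration, not HEAAN and not secure; all parameters (`q`, `n`, `Delta`) and the noise range are made up for the example. It shows the two points above: adding ciphertexts adds the messages, and the noises add up as well.

```python
import random

# Toy symmetric LWE-style scheme (illustration only, NOT secure and NOT HEAAN):
# a ciphertext of m is (a, b) with b = <a, s> + Delta*m + e  (mod q).
q, n, Delta = 2**16, 8, 2**8        # hypothetical modulus, dimension, scaling
s = [random.randrange(2) for _ in range(n)]   # binary secret key

def encrypt(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)     # small fresh noise
    b = (sum(ai * si for ai, si in zip(a, s)) + Delta * m + e) % q
    return (a, b)

def decrypt(a, b):
    # inner product with the secret key, then round the noise away
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % q
    centered = phase if phase < q // 2 else phase - q
    return round(centered / Delta) % (q // Delta)

def add(ct1, ct2):
    # component-wise addition: the messages add, but so do the noises
    a = [(x + y) % q for x, y in zip(ct1[0], ct2[0])]
    return (a, (ct1[1] + ct2[1]) % q)

ct3, ct4 = encrypt(3), encrypt(4)
assert decrypt(*add(ct3, ct4)) == 7   # homomorphic addition works
```

As long as the accumulated noise stays well below `Delta`, the rounding in `decrypt` removes it; once it grows past that bound, decryption fails, which is exactly the limit that makes the scheme leveled.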
This noise problem was solved in 2009 by Craig Gentry, who proposed a technique called bootstrapping, which manages the noise growth by homomorphically evaluating the decryption circuit. Thanks to this technique we obtained fully homomorphic encryption, so we can potentially evaluate any function. But the problem with bootstrapping is that it is a very expensive operation, so the works that followed Gentry's tried, of course, to propose new schemes, but also to improve this bootstrapping technique, and our work does the same. The scheme I will talk about, HEAAN, is an LWE-based scheme. These are some of the LWE-based schemes we have in the literature; all of them have multiple variants, and many of them also come with an implementation. HEAAN, the last one in the list, is one of the newest schemes: it was proposed in 2017 by Cheon et al., and in 2018 it was extended with a bootstrapping. It is also implemented, and the implementation is open source on GitHub. What we did in this paper was to study the HEAAN scheme and its bootstrapping in detail, and to improve the bootstrapping; the techniques we used could also be applied to improve other homomorphic computations. HEAAN stands for Homomorphic Encryption for Arithmetic of Approximate Numbers. As I said, a homomorphic ciphertext contains some noise, and in the case of HEAAN this noise is treated as part of the error generated during the homomorphic computations. The scheme was initially proposed as a leveled scheme, so it can support a certain number of multiplications. A ciphertext at level l is expressed with respect to a ciphertext modulus q_l, and the encrypted message is very small with respect to this modulus.
What is important to know is that decryption is just an inner product between the ciphertext and the secret key, followed by a reduction modulo this ciphertext modulus. The result of this decryption is the message plus the error, but since in HEAAN the error is part of the computational error, we can say that it is simply an approximation of the message. The scheme is leveled, as I said, so we can perform only a certain number of multiplications, because each multiplication consumes a level: after l levels, we have to stop. The scheme encodes vectors of complex numbers as plaintexts, and in particular it supports, like many other homomorphic schemes, a packing technique, so we can pack multiple messages together and perform batched computations. I will call the polynomial representation the coefficient representation, and the representation as complex numbers the slot representation. As I said before, bootstrapping refreshes a noisy ciphertext by homomorphically evaluating the decryption circuit. In HEAAN we will also homomorphically evaluate the decryption circuit, but the goal will not be to reduce the noise; it is to regain new levels on which to perform further homomorphic computations. The decryption circuit for HEAAN, as I said a few minutes ago, is an inner product between the ciphertext and the secret key, reduced modulo q. If we do not reduce modulo q, what we retrieve as the result of this inner product is the message plus q multiplied by a small factor e. So the idea of the bootstrapping is the following: let us view the ciphertext with respect to a larger modulus Q. The decryption then just consists in a modular reduction, which yields again an encryption of the message m. And in order to evaluate this modular reduction homomorphically, we will evaluate a function that approximates it.
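As a worked numeric sketch of this starting point (toy numbers of my choosing, no actual encryption): over the integers the inner product equals m + q·e for a small integer e; reducing modulo the small modulus q recovers m, while under the larger modulus Q the same value survives unreduced, so the mod-q reduction is exactly what must be evaluated homomorphically.

```python
# Toy numbers for illustration only.
q, Q = 64, 2**20
m, e = 5, 3                       # small message, small integer factor
inner_product = m + q * e         # value of <ct, sk> before any reduction
assert inner_product % q == m             # ordinary decryption: reduce mod q
assert inner_product % Q == m + q * e     # mod Q nothing is removed yet:
                                          # the mod-q step must be done
                                          # inside the encryption
```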
This function has to be close to the identity near zero and q-periodic in order to approximate the modular reduction correctly. In the original paper, the authors proposed to evaluate the scaled sine function, which is represented by the figure in my slide, taken from their bootstrapping paper; the formula is at the bottom. Okay, so just to summarize the bootstrapping steps: we start from a low-level ciphertext encrypting the message m. As I said, we go up to a larger modulus, which I will call q_L from now on. At this higher level, the same ciphertext encrypts not the message alone, but the message plus q times e, and we would like to evaluate the modular reduction. Since the ciphertext packs multiple messages together, we start by evaluating a coeff-to-slot operation, which goes from the coefficient, that is, polynomial, representation to the slot representation. We then evaluate the sine function, which is an approximation of the modular reduction, and we finish by coming back to the coefficient representation. The idea is that if we choose the parameters properly, the level we arrive at will correspond to a modulus larger than the original little q, which leaves us levels to perform further homomorphic computations. Okay, so how does the sine evaluation work in the original paper? They perform it in two steps: they start by evaluating a scaled complex exponential function, and then they retrieve the sine by extracting the imaginary part. In order to perform the exponential evaluation efficiently, they again proceed in two steps: they first evaluate a Taylor polynomial of low degree d_0 that approximates the exponential precisely on a very small range, and then they obtain the desired precision on the larger range by repeated squaring.
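The scaled sine approximation can be checked numerically. A small sketch on plain floats (not ciphertexts): f(t) = (q/2π)·sin(2πt/q) is q-periodic and close to the identity near zero, so f(m + q·e) ≈ m whenever |m| is much smaller than q.

```python
import math

def scaled_sine(t, q):
    # (q / 2*pi) * sin(2*pi * t / q): q-periodic, ~identity near 0
    return q / (2 * math.pi) * math.sin(2 * math.pi * t / q)

q = 2**10
for m in (-7, 0, 3, 11):          # messages small with respect to q
    for e in (0, 1, 5):           # arbitrary small multiples of q
        assert abs(scaled_sine(m + q * e, q) - m) < 0.01
```

The approximation error grows roughly like (m/q)³, which is why the message must be kept small relative to q for the bootstrapping to be accurate.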
The total degree of the polynomial they evaluate is the product of d_0 and 2 to the power r, which in their case is about 1,000. In our work, we took a different approach: instead of passing through the exponential, we go straight to the sine evaluation, and we approximate the sine by a Chebyshev interpolant. The Chebyshev interpolant is represented by the formula on the slide: it is just a linear combination of Chebyshev polynomials, which are recursively computed polynomials in the ciphertext, multiplied by coefficients that can be precomputed. Why use Chebyshev instead of the previous technique? Mainly because we can obtain better precision while consuming fewer levels. To give you practical numbers: where HEAAN needed a polynomial of degree about 1,000, in our case we need a polynomial of degree about 100. And in order to evaluate this Chebyshev interpolant efficiently, we use a modified version of the Paterson-Stockmeyer algorithm combined with the baby-step giant-step technique, adapted to the Chebyshev basis. We are thus able to evaluate this polynomial with about the square root of its degree non-scalar multiplications between ciphertexts. The second improvement is in the linear transforms. Coeff-to-slot and slot-to-coeff are linear transforms; they are performed in every bootstrapping, and they are the most costly part of the entire evaluation. We observed that these linear transforms can be computed with FFT-like algorithms, which are composed of multiple levels, and in every level we have a certain number of rotations and scalar multiplications. Each level in practice evaluates, batched together, multiple butterfly operations. On the slide, in the blue box, I represented this linear transformation: we have k levels, and in every level we have two rotations.
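Before the linear-transform details, here is a plain-float sketch of the Chebyshev part (my own illustration, not the paper's code): interpolation at Chebyshev nodes, then evaluation as a linear combination of the recursively computed polynomials T_{k+1}(x) = 2x·T_k(x) - T_{k-1}(x). On ciphertexts the same recurrence applies, with the coefficients c_k precomputed in the clear; the Paterson-Stockmeyer and baby-step giant-step restructuring is omitted here.

```python
import math

def cheb_coeffs(f, d):
    # interpolate f on [-1, 1] at the d+1 Chebyshev nodes of the first kind
    n = d + 1
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fvals = [f(x) for x in nodes]
    coeffs = []
    for k in range(n):
        s = sum(fvals[j] * math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
        coeffs.append((1 if k == 0 else 2) * s / n)
    return coeffs

def cheb_eval(c, x):
    # linear combination of T_k(x), built by the three-term recurrence
    t_prev, t_cur = 1.0, x
    acc = c[0] + (c[1] * x if len(c) > 1 else 0.0)
    for k in range(2, len(c)):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
        acc += c[k] * t_cur
    return acc

c = cheb_coeffs(math.sin, 15)     # degree ~15 already very accurate on [-1, 1]
assert abs(cheb_eval(c, 0.3) - math.sin(0.3)) < 1e-12
```

The point of the Chebyshev basis is that, for the same target precision, the required degree is much lower than for the Taylor-plus-squaring approach, which is what saves levels.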
A rotation to the left, a rotation to the right, and three scalar multiplications of the ciphertext by A_i, B_i, C_i. Levels are very important in homomorphic encryption, so if we can consume fewer of them, so much the better, because those levels can be used later for the actual homomorphic computation. An easy idea to reduce the number of consumed levels is to collapse some of them together. For instance, in my example, if I collapse two levels into one, I take V_{i+1} and V_i and express V_{i+1} directly in terms of V_{i-1}; instead of k levels I then have k/2 levels. But the problem is that the complexity starts increasing: instead of four rotations I now have six, and instead of six multiplications I now have seven. So there are two extremes: either all the levels are evaluated separately and the complexity is really small, or I collapse all the levels into a single one and the complexity becomes very large. We therefore looked for a trade-off, and to find it we used some dynamic programming that helped us decide the collapsing strategy and the collapsing points. In the figure on the slide, the horizontal axis shows the number of consumed levels we decide to use, and the vertical axis the corresponding complexity: you can see that if we collapse everything into a single level the complexity grows very fast, and every additional level we allow reduces the complexity. Okay, so those are the improvements. We implemented all of this in order to see the practical impact of these improvements on the bootstrapping, and we compare our results with two other results.
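This level-versus-complexity trade-off can be sketched with a small dynamic program. Note that the cost model `rot(g)` below is a hypothetical assumption made up for illustration; the model in the paper counts the rotations and scalar multiplications of the actual transform.

```python
from functools import lru_cache

def rot(g):
    # HYPOTHETICAL cost model: a stage merging g radix-2 levels
    # touches about 2**(g+1) diagonals, hence that many rotations
    return 2 ** (g + 1) - 2

def best_cost(k, depth_budget):
    # minimal total rotation count for k FFT-like levels, collapsed
    # into at most depth_budget stages (each stage consumes one level)
    @lru_cache(maxsize=None)
    def dp(remaining, stages):
        if remaining == 0:
            return 0
        if stages == 0:
            return float("inf")   # no stages left but levels remain
        return min(rot(g) + dp(remaining - g, stages - 1)
                   for g in range(1, remaining + 1))
    return dp(k, depth_budget)

# fewer consumed levels -> higher complexity, and vice versa,
# matching the curve on the slide
assert best_cost(8, 1) > best_cost(8, 2) > best_cost(8, 4)
```

The dynamic program simply searches over all ways to partition the k levels into at most the allowed number of collapsed stages, which is how one can pick both the collapsing strategy and the collapsing points for a given level budget.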
The first is the original HEAAN bootstrapping paper from 2018, which I will call HEAAN-Boot in the following; in the meantime this bootstrapping has been improved by the authors in their implementation, which I will call HEAAN-Boot+. We implemented our code on top of HEAAN-Boot+ and tested it on an ordinary laptop. I will present implementation results on two different parameter sets. The first one is this one: in the upper part you can see the original HEAAN bootstrapping results and the improved implementation, and below them our results. If you check the linear transform column, you can see that we have a huge improvement in terms of timing, and this improvement is even more evident when you look at the amortized timing. Amortized timing means the bootstrapping time per slot; remember that every ciphertext encrypts multiple slots, multiple messages. Here we can see that we have at least a factor 5 to 10 improvement compared to the improved implementation, and on a larger parameter set the improvement in amortized time is even more impressive. Of course, we are using a larger number of slots than the original HEAAN: thanks to the FFT-like algorithms we are able to manage the complexity better, so we could go to larger numbers of slots. Okay, I will conclude by summarizing what we saw in this presentation. As I said, we improved the HEAAN bootstrapping, in particular its two parts: the sine evaluation and the linear transforms, coeff-to-slot and slot-to-coeff. And I showed you implementation results which prove that our improvements are concrete.
About future work: as I said at the beginning, the techniques we proposed are not useful only for the bootstrapping, but can be used to improve other operations. In particular, the Chebyshev interpolant could be used, for instance, to improve the evaluation of functions such as the sigmoid or the ReLU, which are widely used in machine learning on encrypted data. The same improvements we proposed for the linear transforms could also be used to improve the evaluation of the discrete Fourier transform on homomorphic data. And an open problem which would be very interesting to look at is to implement this improved bootstrapping on the new RNS version of HEAAN, which has already been shown to be faster than the original implementation. I think this is all I wanted to say. Thank you very much for your attention.

Since I will use the microphone, please walk to that mic.

I have two questions regarding the linear transform improvement. The first question: there was a paper published a little later on ePrint by Seoul National University proposing an FFT-like technique. Have you had a chance to examine both approaches, yours and the one proposed in that paper?

Can you repeat the question? I lost you in the middle of the question.

The FFT-like technique that was applied to HEAAN bootstrapping by Seoul National University: I am wondering whether you have had a chance to compare the two approaches, the complexity of the two approaches, and whether they are very similar or you have seen some differences.

The paper was, I think, published on ePrint after our paper, so we did not have the chance to compare them in another paper. From what I remember, they use a similar technique to improve the DFT; I think their main goal was to improve the DFT, so they did not apply this technique to the bootstrapping. The idea of improving the DFT is similar, but I cannot give you any more precise information.

Thank you.
I have another question about the linear transform part. It looks like the analysis of the number of levels used assumes that all rotations take basically the same time, but there was a recent work at Crypto 2018 showing that, using hoisting, certain rotations can be done much faster. Did you consider that the analysis of the linear transform versus the FFT-like technique, and the choice of parameters, might change slightly if rotations are not treated as equal in terms of efficiency?

Can you repeat it again? Sorry.

In the analysis of the linear transform versus the FFT-like technique, I think your assumption is that all rotations take roughly the same time, but sometimes you can use the hoisting technique and have much faster rotations, up to one order of magnitude. Have you considered applying the hoisting technique to improve the linear transform in your case?

We did not consider it in the paper, but I will certainly take a look at it. It is a very interesting question, so I will check it after this talk.

Thank you. You mentioned that using RNS is future work, but on the implementation slide you said you are using numbers that are about 1,000 bits long, so what are you using for the long integers? The same thing they used in the original implementation?

It is not RNS.

What is it?

I think it is large numbers. Sorry.

If there is another question then let's thank