We rely more and more on the cloud for everything: storing data, using apps, accessing services, because it is very convenient. At the same time, we see data breaches, and those can be very damaging for users because the leaked data can be private or sensitive. One way to address that issue is to rely on fully homomorphic encryption, or FHE. What exactly is FHE? FHE is encryption, meaning that there is a key to encrypt and to decrypt. But there is more: there is also a public evaluation key that allows anyone, given the encryption of x, to get the encryption of f(x). So everything is end-to-end encrypted: data is encrypted at rest, in transit, and even during its processing. In the case of a data leak, at best the attacker gets access to encrypted data. In the setting I am showing here, the same key is used for encryption and decryption, but of course we can do the same in the public-key setting, and in the case of FHE there is a very efficient way to convert any secret-key FHE scheme into a public-key one. The main issue, if you want to build or use FHE, is the noise. Today, all known instantiations of FHE make use of noisy ciphertexts, and this is for security reasons. The thing is that if the noise present in a ciphertext becomes too high, at some point the ciphertext will no longer be decryptable. So it is very important to control the noise, and there are known ways to do that. Here is just an example. Assume that there is some private data x and you would like to compute K times x. There are several approaches. The basic one is to first get the encryption of x, so in this case an FHE ciphertext, and then multiply that ciphertext by K, which gives you the encryption of K times x. But there is another way.
Instead, what you can do is first decompose K, the scalar, and obtain all the ciphertexts of B^i times x, where B is the radix used for the decomposition. Then you combine those ciphertexts: you compute a multi-sum whose weights are just the digits of the decomposition. It is quite easy to see that this also gives you the encryption of K times x. So what is the advantage of the second approach over the first one? The main advantage is the noise. In the first case, if we look at the noise in the resulting ciphertext, we see that it is proportional to K squared, whereas in the second approach the noise becomes proportional to the sum of the squares of the digits. And that second quantity is much smaller than the first one, so regarding the noise, the second approach is much better. What I would like to do in this talk is to find the best possible decomposition so as to minimize that quantity. That quantity, the sum of the squares of the digits, is called the Euclidean weight, and the goal is to find the decomposition that minimizes that value. In this setting, the digits can be positive or negative, in the range -(B-1) up to B-1. Actually, we already know a couple of pretty good decompositions. For example, when the radix is equal to two, we know that the NAF, the non-adjacent form, has the maximum number of zero digits in its decomposition; you cannot get something better. In the case of an odd radix B, we know that if you decompose an integer using digits in the range -(B-1)/2 up to (B-1)/2, then the form is balanced, and we can show that the weight is also minimal in that case. So the difficult case is when B is even and larger than two. But here is a very useful observation.
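As a rough illustration of the noise gap, here is a toy comparison in plain Python. The noise is modeled only by the two multiplicative factors mentioned above, K squared versus the sum of the squared digits; this is a sketch, not a real FHE scheme:

```python
# Toy comparison of the two approaches to computing Enc(K * x).
# Direct approach: noise grows proportionally to K^2.
# Decomposed approach: noise grows proportionally to sum(d_i^2),
# the Euclidean weight of the radix-B decomposition of K.

def digits_of(k, B):
    """Plain (unsigned) radix-B decomposition, least-significant digit first."""
    ds = []
    while k:
        ds.append(k % B)
        k //= B
    return ds

K, B = 1000, 4
ds = digits_of(K, B)
assert sum(d * B**i for i, d in enumerate(ds)) == K  # valid decomposition
print(K**2)                     # noise factor of the direct approach
print(sum(d * d for d in ds))   # noise factor of the decomposed approach
```

Even with plain unsigned digits, the decomposed approach wins by orders of magnitude; signed digits, discussed next, reduce the Euclidean weight further.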
Assume that you are given such a decomposition in radix B. What you can do is flip one digit: you take its opposite value, and to compensate you just have to propagate a carry to the next digit. Here is an example. Take B equal to four. Then (2, 2) is a valid decomposition of 10. If we now flip one digit, say the second two, we get another valid decomposition of 10: (2, -2, 1). But you can also flip the first digit, the least significant one, in which case you get (-2, -1, 1). And actually, we can show that this last form has minimal Euclidean weight. So what does that example tell us? When we have a two followed by another two, or a two followed by a minus two, what we would like to do is flip the digit. That is a pretty good intuition, and the way to get the NAF is almost exactly that: flipping a digit when it is B/2 or a larger value, depending on the next digit. This is the general recoding algorithm. The input is some integer K and the output is the B-NAF of K: a decomposition of K using digits in the set -B/2 up to B/2. You see that there is a while loop, and at each iteration we extract one digit of the scalar K, then update K, and then, depending on some condition, we flip the digit or not. Again, let us first look at the special cases. First, when B is odd, no digit can be exactly B/2, since that is not an integer; that part of the condition never applies, so the condition becomes simpler, and this is actually the way to get the balanced form for odd B.
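The flip and its carry can be checked numerically. This little sketch reuses the talk's radix-4 example, with digits listed least-significant first; replacing a digit d by d - B and adding a carry of +1 to the next position keeps the value unchanged, since d·B^i = (d - B)·B^i + B^(i+1):

```python
# Digit flipping in radix B = 4, on the talk's example for the value 10.
B = 4

def value(digits):
    """Evaluate a digit list (least-significant first) in radix B."""
    return sum(d * B**i for i, d in enumerate(digits))

assert value([2, 2]) == 10        # 2 + 2*4,        Euclidean weight 8
assert value([2, -2, 1]) == 10    # second digit flipped, weight 9
assert value([-2, -1, 1]) == 10   # first digit flipped,  weight 6 (minimal)
print([sum(d * d for d in ds) for ds in ([2, 2], [2, -2, 1], [-2, -1, 1])])
```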
The other special case is when B is equal to two. In this case digits cannot be strictly larger than one, so only the second clause of the condition applies, and in that clause both B/2 and the threshold for the next digit are equal to one. So what does that mean? It means that if the digit we get is a one, and the next one would also be a one, then we flip: that one becomes a minus one. When you update K, K becomes even, and at the next iteration you get a digit equal to zero from the mod-two operation. So if you have a one followed by another one, what you get after recoding is a minus one followed by a zero. And you see that we obtain the non-adjacent form, because the product of two adjacent digits is always equal to zero. Okay, now the general case, when B is even. In that case we need both clauses: we flip when the digit is larger than B/2, or when the digit is exactly B/2 and the next digit is at least B/2. This is the general recoding algorithm to get the B-NAF; it works for any integer and any radix B, and you see that it is pretty efficient and actually quite easy to compute the B-NAF decomposition of an integer K. What we show in the paper, and this is really the main result, is that every integer has a B-NAF, so we have an algorithm, and that the B-NAF is unique. More importantly, we prove that the B-NAF has minimal Euclidean weight among all signed radix-B representations, meaning all representations using signed digits. You cannot do better. We also studied the distribution of the digits: if you take a uniformly random B-NAF and look at one digit in that sequence of digits, it satisfies the following distribution.
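The recoding loop described above can be sketched as follows. This is a minimal reconstruction from the talk's description, with digits listed least-significant first; the paper's exact algorithm may differ in presentation:

```python
def bnaf(k, B):
    """B-NAF recoding sketch: signed digits in [-B//2, B//2], least-
    significant first. A digit is flipped (B is subtracted, a carry of +1
    is propagated) when it exceeds B/2, or when it equals B/2 and the
    next raw digit is at least B/2."""
    digits = []
    sign = -1 if k < 0 else 1
    k = abs(k)
    while k != 0:
        d = k % B                        # extract one raw digit
        nxt = (k // B) % B               # next raw digit, before any flip
        if 2 * d > B or (2 * d == B and 2 * nxt >= B):
            d -= B                       # flip the digit
        digits.append(sign * d)
        k = (k - d) // B                 # update the scalar (absorbs the carry)
    return digits

def value(digits, B):
    return sum(d * B**i for i, d in enumerate(digits))

assert bnaf(10, 4) == [-2, -1, 1]        # the talk's radix-4 example
assert bnaf(7, 2) == [-1, 0, 0, 1]       # classical NAF of 7: 8 - 1
# Non-adjacency for B = 2: no two consecutive nonzero digits.
ds = bnaf(123456789, 2)
assert all(ds[i] * ds[i + 1] == 0 for i in range(len(ds) - 1))
```

For odd B the clause `2 * d == B` can never hold, so only the first clause remains, matching the balanced-form special case mentioned earlier.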
You see that the digit zero has a higher occurrence probability, the digits B/2 and -B/2 have a lower occurrence probability, and all the other digits have a probability of 1/B of occurring. When the radix is odd, all digits are equiprobable. From that, we computed the expectation, and what is nice is that the expectation is equal to zero, meaning that we have a centered distribution; we also computed the variance, which is given by this expression. In the paper you can also see the exact distribution for an n-digit integer. Actually, we can extend what we have done for integers to modular integers, and we get essentially the same result: we can obtain the modular B-NAF from the integer B-NAF, and again we can prove that B-NAFs exist and are unique. Well, this is almost correct: from the definition, you see that when the leading recoded digit is B/2 or -B/2, you can still flip that digit, so in that case there are two possible B-NAFs. We also keep the important property that the B-NAF has minimal Euclidean weight, and this is key for applications. Since the B-NAF attains the minimal Euclidean weight, it can be used in many, many applications; in this talk I will focus on FHE. Something that is used in FHE to control the noise is the gadget decomposition, which is just a way to decompose a scalar into a signed radix-B decomposition. In the case of LWE ciphertexts, what really matters is the Euclidean weight: we have to get the smallest possible value for that weight to get the best possible noise control. And because the B-NAF is optimal, what you should choose for the inverse transformation corresponding to the gadget decomposition is the B-NAF construction. One application of that gadget decomposition is key switching.
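To make the connection concrete, here is a hypothetical sketch of such a gadget decomposition built on the B-NAF. The gadget vector `g`, the length `ell`, and the function names are illustrative assumptions for this sketch, not the paper's notation:

```python
def bnaf(k, B):
    """Same B-NAF recoding sketch as before (least-significant first)."""
    digits, sign, k = [], (-1 if k < 0 else 1), abs(k)
    while k:
        d = k % B
        if 2 * d > B or (2 * d == B and 2 * ((k // B) % B) >= B):
            d -= B
        digits.append(sign * d)
        k = (k - d) // B
    return digits

def gadget_inverse(k, B, ell):
    """Illustrative G^{-1}: the B-NAF digit vector of k, zero-padded to
    length ell, so that its inner product with g recovers k."""
    ds = bnaf(k, B)
    assert len(ds) <= ell, "k too large for this gadget length"
    return ds + [0] * (ell - len(ds))

B, ell = 4, 6
g = [B**i for i in range(ell)]     # gadget vector (1, B, ..., B^(ell-1))
k = 1000
dec = gadget_inverse(k, B, ell)
assert sum(d * gi for d, gi in zip(dec, g)) == k
print(dec, sum(d * d for d in dec))   # digit vector and its Euclidean weight
```

Because the digit vector has minimal Euclidean weight, the noise contributed when this vector multiplies the gadget ciphertexts is as small as a signed radix-B decomposition allows.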
Key switching is just a way to convert a ciphertext under one key into a ciphertext under another key, possibly with another set of parameters. This is done using key-switching keys, which are encryptions of the key digits that are then scaled appropriately using that gadget decomposition. And again, because the B-NAF is optimal, this is a very good way to limit the noise in the key-switching operation. Another application is the FFT. As you know, the FFT is a very good way to get fast polynomial multiplication, and what has been observed is that when the representation is balanced, which is the case for the B-NAF, the errors related to floating-point arithmetic are lower. So when using the FFT, as is done for example in FHEW or in TFHE, it is useful to use the B-NAF to get a reduced round-off error. To conclude and as a summary: in this talk we introduced a new form, the B-NAF; we showed that the B-NAF always exists and is unique; that it is optimal, meaning that its Euclidean weight is minimal; we also saw the digit distribution; and we covered a couple of cryptographic applications. If you want to know more on the topic, I invite you to check out the paper, which is available on ePrint. Thank you.