I'll be brief in terms of what this is all about. Fully homomorphic encryption is an extension of the standard encryption cryptographic primitive. Standard encryption can be used to protect data at rest or in transit: you can use it to securely store your data on a disk, or to secure data being transmitted over a network. FHE allows not only to secure data and leave it there; it allows performing computations on encrypted data without knowing the decryption key and, in fact, without ever decrypting the data. So this opens up, at least theoretically, many interesting possibilities, for example securing outsourced computation, where you load both your data and your program in encrypted form onto a cloud server, have the server run all the computations, obtain the result in encrypted form, and have this result sent back to you.

In terms of timeline, the idea of homomorphic encryption dates back a long time: already in the late 70s, Rivest, Adleman and Dertouzos suggested that such a thing could exist, but it remained a major open problem in both theoretical and practical cryptography for several decades, until Craig Gentry achieved a major breakthrough in 2009. He suggested the first candidate construction of a secure fully homomorphic encryption scheme, but more importantly, he introduced a certain bootstrapping technique that was key to achieving the result for his candidate construction in the first place, and that also turned out to be very influential and used in all the subsequent constructions discovered after his. And there has really been a lot of subsequent work after this first result. One may argue that the main achievement of Craig at the time was not finding the first candidate, but convincing the cryptographic community at large that the problem had a solution. Before his work, the community was divided between people trying to prove that the problem is impossible and cannot be solved, and people trying, unsuccessfully, to find a solution. So even if the candidate solution as first proposed by Gentry was not quite ideal in terms of security and efficiency, it really marked a major change in attitude. A lot of people started working on positive constructions, and there has been a lot of work on improving both the security of the schemes, putting them on the solid basis of standard cryptographic assumptions (to the extent that lattice assumptions are considered standard, and I think by now they are fairly standard), and their efficiency: there has been progress by several orders of magnitude in the speed at which you can perform these computations on encrypted data. These efficiency improvements also led to several implementations in libraries that are there, ready to be used. Among these are the names on the bottom row: HElib, SEAL, PALISADE, FHEW, HEAAN, Λ∘λ, TFHE, and so on.
So you can see from this list that there is a great variety of concrete constructions of FHE. But despite all this variety of names of libraries and candidate schemes, all of these constructions at the bottom rely on a small set of common, simple building blocks. There are many more similarities between these different schemes than the difference in names may suggest. So in this talk I will start by giving some motivations for FHE, which go beyond the straightforward application to secure outsourced computation, to give you a sense of the impact that this development had in cryptography beyond the solution of a specific security problem.

Pretty much all the constructions known to date rely on lattice cryptography, and this was somewhat surprising at first. Until around the time of this discovery, lattices were still trying to reach the stage where they could be considered equivalent to other competing complexity assumptions, and FHE is one of the applications that really put lattices in the spotlight as something that could be used to solve problems that we do not know how to solve otherwise. So in this talk I will try to distill and highlight the features of lattice cryptography that made it so appropriate for solving the homomorphic encryption problem, which interestingly are quite different from the reason why lattices were first picked as a candidate solution. Originally, the first candidate solution to homomorphic encryption was based on algebraic lattices, because of the intuition that these lattices directly support both addition and multiplication operations, which you do need in order to perform arbitrary computations. It turned out that this is far from necessary, in the sense that you can build homomorphic encryption just out of the simple, very weak linear structure of lattices. The algebraic structure can still be very useful for efficiency, but it is not really needed for feasibility results and functionality. So I will highlight some key properties that are peculiar to lattice cryptography, and then from there show how you can build fully homomorphic encryption in a fairly generic, modular way, without any reference to lattices.

Lattice cryptography, and FHE in particular, still has, I think, a reputation of being quite a technical area with very involved and complex constructions. And indeed many of the more efficient constructions of FHE involve quite a bit of non-trivial mathematics, non-trivial by computer science standards (I know there are mathematicians in the audience, so that is an important clarification). We will see how, from these basic properties, without using any mathematical property at all other than some simple linearity properties, we can build a sequence of increasingly more advanced cryptographic primitives, going from symmetric-key cryptography to public-key cryptography, then to encryption supporting the computation of arbitrary linear functions, and then also multiplication, and therefore arbitrary arithmetic circuits and general computation. But let me start with the applications. The direct application of FHE is to outsource computation in a secure way. But beyond that, FHE turned out to be an incredibly powerful cryptographic tool, which I like to think of as a cryptographic pantograph. I don't know how many of you have actually seen or used one.
It's a mechanical device that can be used to enlarge a picture: there is a pointer, and as you trace the small picture with the pointer, a longer arm reproduces exactly the same picture but on a bigger scale. Now, FHE can be used, and is being used, to achieve this type of thing for many other cryptographic primitives. This was first done by Craig Gentry with his FHE construction, where he showed that if you can first achieve homomorphic encryption capable of computing simple functions, then you can amplify the power of homomorphic encryption to compute arbitrary functions. And here by "small" I am referring to a very specific property, which is one of the main connections with lattices: if you can solve your problem for small functions, where small means about the complexity of the decryption algorithm (and that's why you need encryption schemes with a very efficient decryption for this to work), then you can do it also for other, more complex functionalities. Following the discovery, or invention, of FHE, FHE itself was used as a tool to perform a similar type of amplification for many other cryptographic functions, starting with indistinguishability obfuscation, then also functional encryption, and then correlation-intractable hash functions, and there are probably other examples that I missed here in the picture. I'll illustrate a couple of these constructions to give you a sense of where FHE comes in, even when dealing with cryptographic problems other than encryption.

Sample application one: indistinguishability obfuscation. Obfuscation is a process by which you take a program and transform it into another equivalent program which produces the same results as the input one. If you run your program on input x, the obfuscated version outputs exactly the same value as running the original program on input x. And still, even though the programs are equivalent, this procedure protects the program in the sense that for any two equivalent programs P0 and P1, which produce the same output for every input, if you obfuscate one of them chosen at random, an adversary will not be able to tell which of the two was obfuscated. Now, what is the connection with FHE and this pantograph type of construction? The first candidate construction of indistinguishability obfuscation was achieved by first building obfuscators for small programs, programs of the size of the decryption algorithm. So assume you have such an obfuscator, this O′ here, that can only obfuscate small programs. The way you can use it to obfuscate large programs is the following. Instead of protecting your program P directly by obfuscation, you encrypt the program P, and the obfuscation of P will be the encryption of P together with an obfuscation of the corresponding decryption algorithm. You can still run this program on any input x by first running the encrypted program homomorphically on the input x, because we are using a fully homomorphic scheme; that produces an encryption of the result of the program, the encryption of P(x). After you do that, you can run the obfuscated decryption algorithm to extract P(x) out of it.
Now, this is a somewhat oversimplified description of the construction: the actual construction needs to encrypt the program twice and then perform a consistency check in order for the resulting construction to be secure, but this is the high-level structure. Something similar can also be done for correlation-intractable hash functions. A hash function family H is correlation intractable with respect to some relation R if it is hard to find an input x which hashes to a value that is in that relation with x. These functions come up in a number of cryptographic constructions, like the Fiat-Shamir heuristic to remove interaction in interactive protocols, Fiat-Shamir signatures, and the recent constructions of non-interactive zero knowledge. It is a property that used to be achieved using random oracles, but now it can be built out of standard lattice assumptions. And this is done through a bootstrapping process that starts from correlation intractability for small functions and then boosts it, using homomorphic encryption, to correlation intractability for arbitrary functions. The construction is the following: the way you hash a value x is by first evaluating an encrypted program homomorphically on it, and then hashing the result. And in this setting, the encryption function is completely protecting the program, so no matter which program you are encrypting here, you will not be able to see the difference. Even if this P were a dummy program that doesn't output anything interesting, the result will still be correlation intractable with respect to arbitrary programs, also different from P.

Okay, so enough for motivations. Lattice cryptography: why are we using lattices to do all of these things, to enable all of these applications? Lattices are very simple mathematical objects. We already defined them three times yesterday, so I'll be brief. A lattice is a regular arrangement of points in space. You can think of them as the intersection points of a regular grid, or as the analogue of a vector space over the integers rather than the reals. And they have many attractive features. They offer a rich set of hard computational problems. They are conjectured to be hard even with respect to quantum adversaries. And they are based on operations that are fast and easy to parallelize; the main operation that you perform on lattices is vector addition. And they give rise to a powerful set of applications, FHE being one of them. So why lattices in cryptography? The properties we'll use from lattices are that they can be used to build a very simple type of symmetric encryption scheme which has some weak linear homomorphic properties (I'll tell you what weak means in a moment; it is not even full linear homomorphism, but some restricted form of it), that this scheme has a simple, essentially linear decryption function, and that it is also circular secure, meaning that you can encrypt the key under itself and the result will not leak the value of the key, which is something that is not true in general for an arbitrary encryption scheme. We'll see that these three properties, regardless of the underlying mathematical construction based on lattices, are enough to build, in a black-box way, more complex schemes that allow multiplication by arbitrary constants,
full linear homomorphisms, multiplication between ciphertexts, and then also fully homomorphic encryption, that is, the ability to perform arbitrary computations on encrypted data.

Lattices these days are almost invariably used in the learning with errors formulation, so we'll put lattices aside and just consider this matrix version of the basic lattice problems, learning with errors, or LWE. This problem is defined by a uniformly random matrix A with elements chosen modulo a small integer q. We think of this matrix A as defining a function, the LWE function keyed by A, that takes as input two vectors, typically small vectors with short coordinates, and maps these two vectors s and e to the combination As + e. So it is a matrix-vector multiplication with the addition of a small noise vector. These are conventionally called s and e because s serves the role of a secret key and e serves the role of an error or perturbation vector. The parameters and the size of the matrix are usually polynomial in n, which serves as the main security parameter. This problem can be considered an injective version of another problem, the short integer solution problem, that had already been suggested by Ajtai years before; but in this formulation it was introduced by Regev in 2005, who proved it secure based on the quantum hardness of lattice problems. These days the hardness is also known under classical reductions, by the work of Brakerski et al. So what Regev proved, and what was later extended to classical reductions, is that the LWE function is one-way: it is hard to recover s and e given the values A and b, where A defines the function and b is the output of the function. And not only is this function one-way, but the output of the function is pseudorandom: it is computationally hard to distinguish b from a truly uniformly random vector modulo q.

Now, the pseudorandomness of b immediately suggests a simple way to use this problem to perform secret-key encryption. You can use LWE to encrypt messages as follows. The idea is to use this pseudorandom vector b as a one-time pad to mask your message. So what we'll do is compute b as the combination As + e and then add this vector to the message m. So the encryption of a message m with randomness A and e is given by the pair (A, b + m), where A is some public randomness which is put in the output as part of the ciphertext, and b is the LWE vector. Interestingly, this is a simple generalization of an encryption scheme of Blum et al. that was already suggested in 1993, way before we got to the point where we thought FHE could be built. At the time this construction was originally proposed, at Crypto '93, it worked modulo 2, as a construction based on the hardness of the learning parity with noise problem. This secret-key encryption scheme based on LWE is a generalization of that method from working modulo 2 to working modulo a larger, but still relatively small, polynomial modulus q. Regev suggested using a larger modulus q primarily to connect the complexity of this problem to lattices via a reduction, but using a larger modulus also proved to be instrumental for the applicability of this type of encryption to a larger variety of problems, including FHE. Decryption can be attempted as follows: you try to unmask the message by computing b and subtracting it from the ciphertext. But there is a small catch.
You don't really know how to compute b exactly. If you know s, your secret key, then by computing the product A times s you can find something which is very close to b: it is within distance e of b. So if in the decryption function you subtract As from b + m, you will not recover just m, but a perturbed version of the message: you will get m plus the error term that was used during the encryption process. Typically this will corrupt the low-order bits of the message, and typically you don't want decryption errors in an encryption scheme; you want to recover the original message. But this can be fixed very easily using a simple form of error correction: you can scale your message up by some factor and then round the result so that the low-order bits disappear. However, for the purpose of describing FHE, it is interesting and useful not to perform this error correction. Think of this error correction as something that is done at the application level, and think of LWE encryption as a form of approximate encryption, where the decryption algorithm is noisy: it only provides an approximation of the message that was encrypted.

Doing this allows us to highlight the linearity properties of this scheme. Consider the expressions representing two ciphertexts, one encrypting the message m1 and the other encrypting the message m2; these are both matrices. If you add these two ciphertexts, where by addition I mean a simple component-wise matrix or vector addition modulo q, you can factor out the secret s by distributivity and see that the result is an encryption of the sum of the two messages, with an amount of error which is the sum of the two encryption errors. And here you get perfect equality. More generally, if you think of this as a noisy encryption scheme where you encrypt a message m with noise bounded by some parameter beta, then this is a procedure that takes two ciphertexts and combines them so that from encryptions of m1 and m2 you get an encryption of m1 + m2 with noise bounded by the sum of beta1 and beta2, the error bounds of the input ciphertexts. This is the reason why I'm calling this a weak linearity property: as you add up more and more ciphertexts, the error that corrupts the message gets bigger and bigger. The errors add up, and you cannot really perform arbitrary linear computations. You can only add up a small number of ciphertexts, and if you take linear combinations, they should be linear combinations with small coefficients. If you were to take linear combinations with large coefficients, that would blow up the error and make your ciphertexts undecryptable.
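To make this concrete, here is a minimal numerical sketch in Python of the scheme as described so far. The parameter values and function names are my own, chosen purely for illustration (they are toy values, not secure parameters): it implements the symmetric LWE encryption (A, As + e + m) mod q, the approximate decryption (b − As) mod q = m + e, and the component-wise ciphertext addition just discussed.

```python
# Minimal toy sketch (insecure parameters) of the symmetric LWE encryption
# described above: Enc_s(m) = (A, A s + e + m) mod q, approximate decryption
# (b - A s) mod q = m + e, and component-wise ciphertext addition.
import numpy as np

q, n, k = 2**16, 64, 8              # toy modulus, secret dimension, message length
rng = np.random.default_rng(0)

def keygen():
    return rng.integers(0, q, size=n)                # secret key s in Z_q^n

def encrypt(s, m, noise=3):
    A = rng.integers(0, q, size=(k, n))              # public randomness
    e = rng.integers(-noise, noise + 1, size=k)      # small error vector
    return A, (A @ s + e + m) % q

def approx_decrypt(s, ct):
    A, b = ct
    return (b - A @ s) % q                           # returns m + e, not m exactly

def add(ct1, ct2):                                   # homomorphic addition mod q
    return (ct1[0] + ct2[0]) % q, (ct1[1] + ct2[1]) % q

s = keygen()
m1, m2 = rng.integers(10, 20, size=k), rng.integers(10, 20, size=k)
print(approx_decrypt(s, add(encrypt(s, m1), encrypt(s, m2))))   # ~ m1 + m2
```

The printed vector equals m1 + m2 up to a small additive error, illustrating the weak (noisy) linear homomorphism.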
The second property that is key to building FHE is circular security. As a reminder of how encryption and decryption work from the previous slides: encryption takes the combination As + e and adds it to the message; decryption performs a similar operation in reverse, by performing a subtraction. So you may ask: what is the decryption of the ciphertext (−A, 0)? If you simply apply the decryption algorithm to this ciphertext, whatever the secret key s is, and substitute, you will see that this ciphertext decrypts to 0 + As (the reason I put the minus sign in front of A is so that when you subtract, the minus sign disappears). So this gives a linear combination of the secret key vector. You see that even without knowing the secret key, you can build a ciphertext that encrypts the secret key, or in fact one that encrypts any linear function of the secret key of your choice. Since you can compute this ciphertext on your own, it follows that this scheme is circular secure: encryptions of the secret key do not leak information, because they can be computed on your own without any knowledge of the secret key at all, so they cannot help in the decryption process. Of course, this is not really a random encryption of the secret key; you can see that the last column is zero. But it is easy to go from here to a random encryption of the secret key, simply by adding a random encryption of zero: using the linearity property, a fixed encryption of the secret key plus a random encryption of zero gives a random encryption of the secret key, or of a linear function of the secret key. So this is the second important property that is enabled by the use of lattices in cryptography.

A third property is that the decryption function, at least this approximate decryption function that outputs a noisy version of the message, is linear. We already saw linearity, but here by linear I mean something else: it is not only linear in the ciphertext, which is what gives the linear homomorphic properties; it is also linear in the secret key. If you define a sort of extended secret key, which is your secret vector s (or −s) extended with an extra coordinate 1, then the approximate decryption process becomes exactly a matrix-vector multiplication, where the matrix is the ciphertext (A, b) and the vector is this extended secret key. So decryption is linear not only in the ciphertext but also in the secret key. This is important because it allows the following very interesting operation: if instead of decrypting under the secret key you decrypt under a multiple of the secret key, you will end up recovering not the message, but a multiple of the message. The error also gets multiplied, which can be an issue if the multiplier c is big, but we'll see how to deal with this issue soon. This only holds for approximate decryption: if you do the rounding, the rounding is not linear anymore, and then you get in trouble; we'll see how to address that at a later stage. But for now, if we stick with this approximate decryption operation, we get linearity in the secret key.

OK, so let's move on. From this point on, there will be very little lattice and LWE in the talk. We have an encryption scheme where you can perform the following operations. You can add up ciphertexts. You can negate a ciphertext: if you simply change the sign of all the entries, an encryption of m is transformed into an encryption of −m; in this case the error doesn't even grow, it changes sign, but it doesn't increase. You can multiply ciphertexts, at least by small constants; the reason is that if you multiply the ciphertext by a large integer, the error gets multiplied too, and you don't want the error to become too large. You can also build encryptions of any message of your choice without knowing the secret key: (0, m) is an encryption of m, since with the zero matrix A the secret key cancels out. And you can also build encryptions of linear functions of the secret key. These are all operations that can be performed without knowledge of the secret key.
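Continuing the previous toy sketch (reusing q, n, k, rng, keygen, encrypt, and approx_decrypt from it, with names of my own choosing), the snippet below illustrates numerically three of the operations just listed: decryption written as a linear function of the extended key (−s, 1), a publicly computable "trivial" ciphertext (−C, 0) that decrypts exactly to the linear function C·s of the secret key, and decryption under a multiple c of the key returning (approximately) c times the message.

```python
# Continuation of the previous toy sketch: decryption as a linear map in the
# extended key t = (-s, 1), trivial ciphertexts encoding linear functions of s,
# and decryption under a scaled key.
def extended_key(s):
    return np.concatenate([-s, [1]])                 # t = (-s, 1)

def decrypt_linear(ct, t):
    A, b = ct
    M = np.concatenate([A, b[:, None]], axis=1)      # ciphertext as matrix [A | b]
    return (M @ t) % q                               # = (b - A s) mod q, i.e. m + e

s = keygen()
t = extended_key(s)

# (1) same result as approx_decrypt, written as a matrix-vector product
ct = encrypt(s, np.arange(10, 10 + k))
print(decrypt_linear(ct, t))                         # ~ 10, 11, ..., 17 plus noise

# (2) a "trivial" ciphertext (-C, 0), built without s, decrypts exactly to C s
C = rng.integers(0, q, size=(k, n))
trivial = ((-C) % q, np.zeros(k, dtype=np.int64))
print(np.array_equal(approx_decrypt(s, trivial), (C @ s) % q))   # True

# (3) decrypting under the scaled key c*t yields c*(m + e), errors included
c = 7
print(decrypt_linear(ct, (c * t) % q))               # ~ 7 * (10, 11, ..., 17)
```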
So what are these operations good for? To get a first impression of how powerful they are, let me show you how you can go from secret-key encryption to public-key encryption, which is a non-trivial operation: we know it cannot be done in general, for arbitrary encryption schemes, under standard assumptions. You can build a public-key encryption scheme as follows. You start from the secret-key version of LWE encryption, and you define your public key to be a sequence of encryptions of zero. Since the scheme is secure, publishing many encryptions of zero does not leak useful information about the key. Now, given these public encryptions of zero, if you want to encrypt a given message m, you can do it as follows: you take a random linear combination, with small coefficients, of the public encryptions of zero. An encryption of zero, even when you multiply it by a small constant or add several of them together, is still an encryption of zero. So this allows you to compute random encryptions of zero, and if you add to them a trivial encryption of m, which you can also compute, you get a random encryption of m. This makes encryption a public process that everybody can compute, using publicly available information that was computed using the secret key. You can think of the first line, computing those public ciphertexts, as the public key generation process. Decryption is just normal decryption, because the output of the public-key encryption algorithm is a regular LWE encryption of your message, so it can be decrypted using the secret key. In fact, this method is essentially how Regev designed a public-key encryption scheme out of LWE: he proposed the public-key scheme, not a secret-key one, and it is the one you obtain in this manner. Subsequently, Rothblum showed that this idea holds in a very general sense: you can transform any linearly homomorphic secret-key encryption scheme, in a black-box way, into a public-key encryption scheme. The rest of the talk is largely orthogonal to whether the scheme is public key or secret key; it applies to both cases, so for simplicity we can focus on the secret-key case.
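Again continuing the same toy sketch (reusing encrypt, approx_decrypt, keygen and the parameters defined earlier), here is a hedged illustration of this secret-to-public-key transformation: the public key is a list of encryptions of zero, and public encryption is a random 0/1 combination of them plus a trivial encryption of the message. The number of zero-encryptions and the 0/1 coefficients are arbitrary illustrative choices; in actual schemes they are set so that the standard randomization arguments apply.

```python
# Continuation of the toy sketch: Regev-style public-key encryption obtained
# generically from the linearly homomorphic secret-key scheme above.
def pk_gen(s, num=32):
    zero = np.zeros(k, dtype=np.int64)
    return [encrypt(s, zero) for _ in range(num)]    # public key: encryptions of 0

def pk_encrypt(pk, m):
    r = rng.integers(0, 2, size=len(pk))             # random 0/1 coefficients
    A_sum = sum(ri * ct[0] for ri, ct in zip(r, pk)) % q
    b_sum = sum(ri * ct[1] for ri, ct in zip(r, pk)) % q
    return A_sum, (b_sum + m) % q                    # random Enc(0) + trivial (0, m)

s = keygen()
pk = pk_gen(s)
m = rng.integers(10, 20, size=k)
print(approx_decrypt(s, pk_encrypt(pk, m)))          # ~ m, with accumulated noise
```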
And now the question is how to perform operations on ciphertexts without knowing the decryption key. How can we multiply ciphertexts by a constant? We've seen that you can multiply ciphertexts by small constants, and you can also add up a small number of ciphertexts. What if you want to take an arbitrary linear combination, or multiply a ciphertext by a large constant? You cannot do this directly with LWE encryption, but starting from this encryption E, the basic encryption scheme with the basic weak linear homomorphic properties, you can build a slightly more complex scheme, E′, that supports multiplication by arbitrary constants. You can define this higher-level, more powerful scheme as follows: a message m is encrypted by a sequence of log q many basic ciphertexts that encrypt m multiplied by increasing powers of two. So you encrypt m, 2m, 4m, 8m, and so on; it is a small sequence of basic ciphertexts. This is useful because if you want to multiply this ciphertext by a constant c, you can proceed as follows. You first write your constant c in binary, so you get the binary digits of c, which are zeros and ones. These are small coefficients. Then you multiply these component ciphertexts by the digits of c. This is a small linear combination: a combination of at most log q terms, with coefficients which are zeros or ones. In fact it is a subset sum of those ciphertexts, which is something that can be computed with small noise growth using the basic encryption scheme. By the linear homomorphic properties you can take this summation inside the encryption, and you recover the multiplier c as the sum of the digits c_i of c multiplied by the different powers of two. And at this point you have a scheme that supports multiplication by any constant. There is a small tweak here: you are starting from an E′ encryption of m, and you can multiply it by an arbitrary constant c, but the result is an encryption of cm under the more basic, simpler encryption scheme. If you want, you can easily make this into an operation that produces ciphertexts within the E′ encryption scheme itself: what you do is multiply the encryption of m not only by c, but also by 2c, 4c, 8c, and so on. If you do this, you get basic E encryptions of cm multiplied by the different powers of two, which is exactly how we defined the E′ encryption scheme. At this point we have a scheme that supports multiplication by arbitrary constants; the constant, however, must be known in order to perform this operation.
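Here is the same idea in the running toy sketch (again reusing encrypt, approx_decrypt, add, and the parameters from before; names and parameters remain my own illustrative choices): an E′ ciphertext is a list of basic ciphertexts of m, 2m, 4m, ..., and multiplication by an arbitrary constant c is the subset sum of those components selected by the binary digits of c.

```python
# Continuation of the toy sketch: the "powers of two" scheme E' and
# multiplication by an arbitrary known constant via binary decomposition.
LOGQ = 16                                            # q = 2**16 in this sketch

def encrypt_prime(s, m):
    # E' ciphertext: basic encryptions of m, 2m, 4m, ..., 2^(LOGQ-1) m
    return [encrypt(s, (m << i) % q) for i in range(LOGQ)]

def mul_const(ct_prime, c):
    # subset sum of the components selected by the binary digits of c;
    # the result is a single basic (E) ciphertext of (c * m) mod q
    acc = (np.zeros((k, n), dtype=np.int64), np.zeros(k, dtype=np.int64))  # trivial Enc(0)
    for i, ct in enumerate(ct_prime):
        if (c >> i) & 1:
            acc = add(acc, ct)
    return acc

s = keygen()
m = rng.integers(1, 4, size=k)
c = 12345                                            # an arbitrary large constant
print(approx_decrypt(s, mul_const(encrypt_prime(s, m), c)))   # ~ (c * m) mod q
print((c * m) % q)                                   # compare with the true value
```

In real constructions this decomposition is packaged as the "gadget" (powers-of-two) matrix; the point is that the noise growth is proportional to log q rather than to the size of c.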
So the next step is how to support multiplication of encrypted data: you have two ciphertexts and you want to multiply them together. The idea is to perform this multiplication of ciphertexts via a form of homomorphic decryption. Remember that the decryption operation is linear in the secret key, so we can decrypt homomorphically, given an encryption of the key: since decryption is linear and we have a scheme that supports linear functions, we can perform this operation homomorphically. More specifically, if you are given an encryption of a message m, and you are given an encryption of the secret key under the scheme E′ that supports multiplication by constants, you can take a linear combination of the encryption of the key with the coefficients specified by the ciphertext of m, and this will produce, by the homomorphic property, an encryption of m. Even more interestingly, given an encryption of m and an encryption of a multiple c of the secret key, this process will produce an encryption of c times m, because the linear decryption procedure brings this coefficient c inside as a multiplier of the message. So we want to perform this decrypt-and-multiply procedure. In order to do that, you can think of it as one more encryption scheme, E″, built on top of E′. The E″ encryption of c is defined as follows (c here is the message, but think of it as the constant you want to multiply by): the way you encrypt the message c is by giving the E′ encryption of the c-multiple of the secret key, c·s′. So you can think of this as encrypting the decrypt-and-multiply function. Notice that an E″ encryption is still a collection of basic encryptions of multiples of c. If you just expand the operations, the more complex encryption always breaks down into a collection of basic ciphertexts. And it has the following homomorphic properties. It is still linear, because the components are linear, so you can add E″ ciphertexts component-wise. It also supports multiplication: when you multiply two of these ciphertexts, you can use the homomorphic properties to bring the multiplication inside one of the two encryptions, and that gives a collection of encryptions of multiples of m1 times m2, with the same coefficients that define the E″ encryption scheme, which is exactly the E″ encryption of the product. So at this point we have a scheme that supports both addition and multiplication. But notice that for this scheme to be secure, we need the encryption function E′ to be circular secure, because we are encrypting a multiple of the secret key. What allows us to define this encryption scheme E″ and claim that it is secure is that the underlying schemes E and E′ are circular-secure schemes, where you can securely encrypt the secret key.

Okay, so at this point we have addition and multiplication, but it is not quite an FHE scheme yet, because you still have noise. When you add ciphertexts, you add the errors. More importantly, when you multiply two ciphertexts, you get an encryption of the product of the two messages plus some noise term, which is introduced each time you run this homomorphic decrypt-and-multiply procedure to carry out the homomorphic product. This noise grows and gets bigger and bigger during the homomorphic computation. So effectively we do not have an FHE scheme; we have a scheme that supports only small computations. You cannot perform arbitrary computations, because that would give rise to large noise and produce ciphertexts that are garbage and cannot even be decrypted. This is where the bootstrapping technique comes in. It is a method that transforms an FHE scheme for small functions (here, specifically, the class of functions computable by log-depth circuits, which includes, for example, the decryption function of our lattice-based encryption scheme) into an FHE scheme that supports arbitrary polynomial-time computations on encrypted data.

OK, so back to this bootstrapping idea. The main problem is that the previous methods to do computations never get rid of noise; noise is always accumulating and getting bigger and bigger. In order to perform arbitrary computations, we also need a method to reduce the noise, to take this noise and bring it down. So we need to go back to the idea of encoding the messages in a way that lets you correct the errors by scaling up and rounding. We can define an exact decryption algorithm as follows. Think of the message m as being a single bit; this generalizes to larger message spaces. We are working modulo q, and you can encode your bit as the most significant bit of a number modulo q: you scale your message m by a factor q/2 and then add some error to it. So this is a noisy encryption of the bit m, with this multiplier. Now you can extract the message and cancel out the error by computing the most significant bit of this perturbed value (technically, you first need to shift it by q/4 and then take the most significant bit). So this is our decrypt-and-round decryption procedure.
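A tiny standalone Python sketch of this encode-and-round step (my own rendering of what was just described, with q a power of two for convenience): the bit is placed in the most significant position, noise is added, and rounding (shift by q/4, then take the top bit) recovers the bit exactly as long as the noise stays below q/4 in magnitude.

```python
# Toy sketch of the exact "decrypt and round" encoding: a bit m is stored as
# m * q/2 + e, and recovered as the most significant bit after a q/4 shift.
import random

q = 2**16

def encode(m, e):
    return (m * (q // 2) + e) % q            # noisy encoding of the bit m

def decode(v):
    return ((v + q // 4) % q) >> 15          # shift by q/4, take the MSB (bit 15)

for m in (0, 1):
    for _ in range(1000):
        e = random.randint(-q // 4 + 1, q // 4 - 1)   # any error of magnitude < q/4
        assert decode(encode(m, e)) == m
print("rounding recovers the bit whenever |e| < q/4")
```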
And this function is not linear anymore; that is the downside of correcting errors. However, it achieves this interesting property: you can start from a message term which contains errors, and it produces a clean message that does not have any error at all. Now, of course, you cannot compute this function directly, because computing it requires knowledge of the secret key, and you want to keep the secret key protected. So instead of computing it directly, you compute this function homomorphically, given an encryption of the secret key, and this is the key of the bootstrapping, or refreshing, procedure. You compute the same function, but you think of it not as a function of the ciphertext c: you think of the ciphertext as defining the function, and the secret key as being the message. You are given the encryption of the message, here denoted with braces: you are given the encryption of the secret key, which is the input to the function. Then you compute this function homomorphically on the secret key. This function has small complexity, because the linear part of decryption and extracting the most significant bit are low-depth operations. And this produces an encryption of m scaled by a factor q/2, which is exactly a clean encryption of m. The relevant property in this process is that the final result will have some noise introduced by the homomorphic computation, but this noise depends only on the noise in the encryption of the secret key and on the complexity of this decrypt-and-round procedure; it does not depend on the amount of noise in the original ciphertext c that we decrypted, because that noise e got decrypted away, completely cleared by the decryption process. So, by setting the parameters appropriately, this gives a procedure that takes a ciphertext with a certain amount of noise, which can be as large as q/4 (as long as you can decrypt it, it's fine), and computes an encryption of the same message but with an error bound which is much smaller than that. And once you can do that, you can perform arbitrary computations: you can perform additions and multiplications, which have the downside of increasing the noise, but when the noise gets too big you perform this homomorphic decrypt-and-round operation, which clears, or at least reduces, the amount of error and enables further computation. It gives you a perfectly composable building block that can be used to evaluate arbitrary arithmetic circuits, and this is the FHE scheme. Now notice what we need for this to work. We need the decryption to be exact, and this can be easily achieved by scaling and rounding. And we also need the circular security of E double prime: the way we performed this decrypt-and-round procedure was by giving out an encryption of the secret key, and in order to perform the nonlinear exact decryption we need an encryption scheme that supports both addition and multiplication, perhaps only for small-depth circuits, but in any case a scheme that supports more than just linear operations. And this is exactly what the E double prime scheme gives us.
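In symbols (my own ad-hoc notation, not from the slides), the refreshing step just described can be summarized as follows: homomorphically evaluating the decrypt-and-round circuit of a decryptable ciphertext on an encryption of the secret key yields a new encryption of the same bit, whose noise bound depends only on the bootstrapping key and the depth of that circuit, not on the input noise.

```latex
% Hedged summary of the refresh property; Enc_s(m; e) denotes a ciphertext of m
% under key s with noise e, and DecRound is the exact (rounded) decryption circuit.
\[
c = \mathrm{Enc}_s(m;\, e),\ |e| < q/4
\;\xrightarrow{\ \mathrm{Eval}\big(\mathrm{DecRound}(c,\,\cdot\,),\ \mathrm{Enc}''_s(s)\big)\ }\;
\mathrm{Enc}_s(m;\, e'), \qquad |e'| \le B_{\mathrm{refresh}},
\]
\[
\text{where } B_{\mathrm{refresh}} \text{ depends only on the noise in } \mathrm{Enc}''_s(s)
\text{ and on the depth of } \mathrm{DecRound}, \text{ not on } e .
\]
```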
The only problem is that, while we started from a scheme E which was provably circular secure, we turned it into another scheme E prime and then E double prime, and we do not know how to argue that the resulting scheme also preserves circular security. So circular security of this resulting scheme is something that still today is often assumed as a heuristic assumption, and it is the main theoretical open problem associated with fully homomorphic encryption. Now, this is not needed if you only want to compute functions of fixed depth: in that case, rather than encrypting a key under itself, you can take a chain of keys, one for every level of your circuit, of your computation, and encrypt every key under the previous one. That allows you to get security without any circular-security assumption on the encryption.

So, a brief summary. We have two types of lattice encryption: a basic one, which is circular secure and achieves certain very weak linear homomorphic properties, and which we showed how to transform into a scheme that supports addition and multiplication, but only in small quantities, because the error keeps growing and getting bigger. On top of that, we have a procedure, a nonlinear decryption function executed homomorphically, that allows us to reduce the noise and therefore perform arbitrary polynomial computations, but it requires circular security for a scheme for which circular security is not quite provable based on our current knowledge.

In these last couple of minutes, I want to give you a sketch of how this homomorphic decryption operation works. This is not the method used in practice; it is a terribly inefficient method. It is the method I use to present homomorphic decryption and bootstrapping in my lattice course, and when given as an assignment, students were able to run it for lattices up to dimension five or so before the computation got too slow, to give you an idea of the inefficiency. The decryption operation is the following. You have these coefficients a and b, and then you have these secret elements s_i, the coefficients of the secret key. For simplicity, assume that these coefficients are binary digits, zeros and ones. We want to compute an operation that is essentially a scalar product between a and s, and then also add b to the result; and at the end of this process, we want to extract the most significant bit of this computation. Now, there is a very simple way to do this, which is just the standard textbook schoolbook addition algorithm that you learned in elementary school (in case you had not already figured it out on your own in kindergarten, which is probably the case for many of you). You have these numbers that you want to add up: you want to multiply these numbers by the s_i and then add them up. You write all these numbers in binary, so the columns correspond to different powers of two, and then you start adding these digits using the standard addition-with-carry procedure. You first multiply your numbers by the encryptions of the secret bits, then you add up all the digits in the first column, divide the result by two, and carry it to the second column. Then you add up all the digits in the second column, divide the result by two, and bring the carry to the following column. You keep doing that until you have added up everything, and at that point, if you reduce the result modulo 2, you have exactly the most significant bit of the number that you computed.
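To make the circuit concrete, here is a small plaintext simulation (no encryption at all; it is just my own illustration of the circuit that would be evaluated homomorphically, assuming q is a power of two): it computes the most significant bit of b + Σ c_i·s_i mod q column by column, with carries, exactly as in schoolbook addition.

```python
# Plaintext simulation of the bit-serial circuit evaluated during bootstrapping:
# the MSB of (b + sum_i c_i * s_i) mod q, computed by schoolbook addition with
# carries, one binary column at a time. Here q = 2**logq and the s_i are bits.
import random

def msb_by_school_addition(b, coeffs, secret_bits, logq):
    q = 1 << logq
    addends = [b % q] + [(c % q) * s for c, s in zip(coeffs, secret_bits)]
    bit, carry = 0, 0
    for j in range(logq):                     # process column j (weight 2**j)
        column = carry + sum((a >> j) & 1 for a in addends)
        bit, carry = column & 1, column >> 1  # keep the column's bit, carry the rest
    return bit                                # bit of the top column = MSB mod q

# sanity check against the direct computation
logq = 10
q = 1 << logq
s = [random.randint(0, 1) for _ in range(16)]
c = [random.randrange(q) for _ in range(16)]
b = random.randrange(q)
direct = (b + sum(ci * si for ci, si in zip(c, s))) % q
assert msb_by_school_addition(b, c, s, logq) == direct >> (logq - 1)
print("schoolbook MSB matches the direct computation")
```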
So we need to do this homomorphically, and this can be done using some form of cryptographic accumulator that can be built by putting together many basic ciphertexts. It should support some local operations, like division by two and mod-2 increments, and, more critically, it needs to support multiplication by the encrypted elements of the secret key. The local operations can be done using a certain accumulator structure that was first suggested by Alperin-Sheriff and Peikert, which encodes a value simply by an indicator vector: a collection of encryptions of zeros and ones, with a one in a specific position. This allows the computation of arbitrary functions simply by addition, by adding ciphertexts along the edges that represent the graph of the function you want to compute, and this handles all the local operations. As for the accumulation operation, it can be expressed as follows: you compute the two possible results, either you add one to your value v or you do not, and then you use the multiplicative operation supported by the E double prime scheme to select one of these two values, a0 or a1. And that is exactly what we showed can be done using our E double prime scheme.

OK, so I'm about to conclude the talk, but before doing that, let me give credit. Most of what I said is not my work; these are ideas that were proposed, for various reasons and different applications, in various papers. What I did here was to show how these ideas, some of which even predate the first constructions of FHE, can be combined and used in a different way to present a, hopefully, simple description of an FHE scheme. The idea that lattice cryptography is linear is something that appeared as early as some work of Bellare and me in '97, where we designed an incremental hash function based on the additive properties of lattices; it is also used in the SWIFFT hash function. The multiplication by powers of two to get a small-digit decomposition is something that is present in all the constructions of lattice trapdoors, as well as many FHE schemes, including those of Brakerski and Vaikuntanathan. Approximate decryption, which here was used as a simplifying assumption, is something that has recently been suggested by Cheon et al. as a method to improve the efficiency of homomorphic computation when you can accept errors as a fact of life: many practical computations start from noisy data, so if you get a noisy result, that's fine; you don't even need to correct the error, you can simply accept it. And the final result of the transformation from E to E double prime, even if it was presented in a very different way, is essentially the same as the GSW scheme proposed by Gentry, Sahai and Waters, which was presented in a completely different language, with a different intuition based on approximate eigenvalues and eigenvectors. The accumulators, as I already mentioned, come from the work of Alperin-Sheriff and Peikert, and similar types of accumulators were also used in my work with Léo Ducas on FHEW, and in the more recent work by Chillotti et al. on the TFHE scheme. So the only really new technique in this talk is the way we use this accumulator for the bootstrapping procedure, and as I told you, that's not something you want to implement and use; it can be a useful exercise, but it's not something that would lead to attractive results.
So, concluding remarks. The basic building blocks of these schemes, surprisingly, are things that have existed for a long time. A modulo-2 version of LWE encryption had already been suggested in the early 90s, and from that scheme, run modulo q, it is possible (as we discovered only much later) to build encryption schemes in a black-box way that support addition and multiplication for arbitrary small-depth circuits. And interestingly, the fact that you can perform cryptographic computations on encrypted data for log-depth circuits was also something already known in the late 90s, from the work of Sander, Young and Yung, who designed an encryption scheme where at each level of the computation the size of the ciphertext doubles. Of course, if you go beyond log n levels, ciphertexts become super-polynomial in size, but theoretically this gives a method to perform log-depth computation on encrypted data. If you combine this with bootstrapping, which is the main new ingredient that was discovered ten years ago, you get arbitrary computations and FHE. Bootstrapping is still the main theoretical problem and also the main practical problem: the theoretical problem is the circular security of E double prime, which we do not know how to prove based on standard LWE; and on the efficiency side, bootstrapping is the main bottleneck in homomorphic computations.

You may also wonder if this type of perspective can be useful for other applications, and of course it is; many of these are already out there, and some of them I heard about yesterday here at Eurocrypt. These linearity properties, and how they can be used to achieve interesting things, were used for example in building transformations between different types of FHE schemes, or also to bridge MPC and FHE by translating between them: homomorphic encryption is linear, and many multi-party computation schemes are also based on linear secret sharing, so they are also linear, and you can use this profitably to move things from one side to the other. Some form of linearity is also used in the construction of homomorphic commitments and fully homomorphic signatures. Yesterday, Boyle et al. presented a work where they used the linear properties of secret sharing in a way that is very much like the bootstrapping of encryption, but in a more efficient way. And there was also another interesting talk, by Alamati et al., where they showed how symmetric cryptographic primitives with algebraic structure can be used generically to build more powerful things, which is something very similar in spirit to what I talked about today in the specific context of FHE. And so this concludes my talk. Most of the citations were abbreviated, so this is a list with the full names of the people that contributed to this work, spelled out in full. Thank you for your attention, and I'll be happy to take questions.

OK, thanks, Daniele. So, any questions? OK, yeah, Zvika? OK, so thanks for the talk. I'm wondering, does this abstraction have any sort of limitations, or could you have kept going and maybe even constructed trapdoors through your framework with Chris, and maybe constructed things like ABE and the other things that we know how to do from learning with errors? Is there a gap between what you can do from this abstraction that you presented and what you can do from sort of bare-bones LWE? So, as far as I know, for what I described today, yes, you need other things.
And I think the talk yesterday about symmetric cryptography with algebraic structure seems to open up other possibilities. I don't know how much you can do, so there is certainly space for further extensions and improvements. I really don't know what the boundaries of this type of approach are, but it is certainly worth exploring. And I am not advocating not using lattices; in fact, using lattices explicitly may be the best way to come up with new ideas and constructions. But even just saying that you have a secure encryption scheme with linear decryption and a noisy output for the decryption, that's kind of already LWE, right? That sort of gives you the hardness of an LWE-type problem, maybe not the pseudorandomness. So, I don't know, I think maybe you can even do trapdoors with your abstraction. Yeah, okay. Thanks. Okay, any other questions? When we construct multiple ciphertexts for multiple messages to upload to the cloud, are we allowed to somehow correlate the noise, or does the noise have to be really independent and fresh for every encryption? Because that maybe could be a solution, too. Yeah, so here, of course, I was assuming that encryptions are done using fresh, independent noise, which is where security is guaranteed. Now, security may well hold even when there are correlations between the noise vectors, and the correlation-intractability results from lattice problems suggest that this is quite possible. I was not using that today, but it is something that could be both useful and potentially also secure. And also, when you are building public-key encryption from symmetric encryption, it looks like the public key is much bigger than the... Yeah, it gets bigger. So this construction comes with a price in efficiency. Was that the question? Yes, so is that an inherent price we have to pay, or are there alternative constructions? Okay, so what you get as a result is essentially the same as Regev's public-key encryption scheme. What was built directly as a lattice-based public-key encryption scheme from lattices is not more efficient than the modular construction using the linearity property; if there are differences, they are minimal. So there is a price in going from secret key to public key, but it is essentially the same as what you pay when building public-key encryption directly from lattice problems. Okay, I guess we cannot take more questions, time is flying, so let's thank Daniele again.