computers. She described one possible post-quantum method for cryptography based on paths in isogeny graphs of elliptic curves over finite fields, which is quite a hot new topic. But today I want to discuss another source of quantum-hard problems, based on lattices, that have been proposed for use in post-quantum cryptographic systems. I'll try to keep this talk mostly self-contained while at the same time not repeating too much of what Kristen said; I probably won't quite succeed in either of those, but I'll try. So, let's start. First, a quick run-through of cryptography. In the Dark Ages, which is what cryptographers call the pre-1970s, the basic problem was that the two protagonists, Bob and Alice, wanted to exchange secret information. Say Bob wants to send a message to Alice that their adversary Eve, the eavesdropper, cannot read. So, this is what they do. First, Bob and Alice share some secret key. Then Bob uses the secret key to encrypt his message. Bob sends the encrypted message to Alice, who decrypts it, also using the secret key. And Eve, even though she can intercept the encrypted message, can't read it because she doesn't know the secret key. I'll just mention that in cryptography talks, people tend to put things in red if they're secret and in green if they're public knowledge. That's what the color coding there is; it helps keep track of what's what. The problem with this is that Bob and Alice can't exchange messages until they exchange the secret key. So they have to meet up in some way or other, or they have to send the secret key by some channel that could potentially be intercepted. And this is how cryptography worked from who knows how long ago up until the 1970s. And there's the problem: they need to exchange or share a secret key before they can get going. So, what happens if they've never met and they have no secure communication channel?
And this became especially relevant once the internet existed. So, for example, if Alice is Amazon and the message is Bob's credit card number: they've never previously communicated, but he wants to send his credit card number so that he can buy stuff. Okay. So, in the mid-1970s, Diffie and Hellman proposed creating cryptosystems that used two different keys, a private key that Alice keeps secret and a public key that she publishes. And the idea is that Bob only needs the public key to encrypt his message and create the encrypted message. The cryptographic term for the message is the plaintext, and the encrypted message is the ciphertext. Alice uses the private key to decrypt the ciphertext and recover the message. But Eve can't do that, because she doesn't know the private key; she only knows the public key. And in the interest of historical accuracy, I should mention that the British intelligence services had worked this out also, but they didn't publish any of it until much, much later. Anyway, this is a really fascinating idea. Diffie and Hellman weren't actually able to propose an explicit example, but these became known as public key cryptosystems because of the fact that there's this public key that's used for encryption. But the public key doesn't allow you to decrypt. All right. Let's formulate this mathematically, since we're all mathematicians. Encryption and decryption are actually functions. So, there's an encryption function, which I've represented by this black box. The input is Bob's plaintext, his message, and Alice's public key. The output is Bob's ciphertext. Alice then feeds her private key into the decryption function, and out comes Bob's plaintext. But in order for the decryption function to work, you need the private key that pairs with the public key. That's the idea. Eve knows the public key, but she doesn't know the private key, so she can't decrypt.
The pair of functions encrypt/decrypt is an example of a trapdoor function and its inverse, which means a function that's easy to compute (so encryption is easy to compute) but hard to invert, going from the ciphertext to the plaintext, unless you know an extra piece of information, the trapdoor, which enables you to compute the inverse. All right. That's how to send messages. The other side of public key cryptography, which is really important, is digital signatures, so I want to quickly mention how those work also. Here, Alice has a document that she wants to sign. It could be a contract; a publisher might want to digitally sign a book; or, I'll get to another application later. Instead of encrypt and decrypt functions, there's a sign function and a verify function. What she does is she takes her document, which is public, and she also feeds in her private signing key, runs them through the sign function, and gets out her signed document. Now what happens? Well, Bob or Eve, or anyone for that matter, takes the signed document, takes Alice's public verification key, runs them through the verify function, and gets either yes, the signature's valid, or no, the signature's not valid. And if the public key and the private key are a matching pair, then the answer should be yes. Digital signatures are at least as important as public key cryptosystems for modern communications. A typical example: your cell phone is constantly updating its apps. How does it know that an app update is legitimate? And it really does need to know that, because there are lots of players out there who would love to install software on your phone. The answer is that when the app is created, it's digitally signed by the manufacturer, and your phone verifies the signature of the update before it installs it. Okay, how does one build a trapdoor function to use for public key cryptosystems or digital signature schemes?
So cryptography tends to have a lot of abbreviations; I think computer science probably does too in general. PKC is a public key cryptosystem, DSS is a digital signature scheme. Anyway, we want to build these from hard math problems. Here are three examples that have been used quite a lot. There's the integer factorization problem: given two prime numbers p and q, if I tell you their product pq, it's hard for you to find p and q. There are various ways to exploit this. One of the most common, and the first one, was using the powering function. So you take a number and raise it to the e-th power modulo pq, where e is known and the product pq is known. It's easy to compute this powering function, but it's actually quite hard to invert it unless you know p and q; and conjecturally it's equally hard, in other words, to invert it you really do need to know p and q. So the integer factorization problem in this exponentiation form was used by Rivest, Shamir, and Adleman to create the RSA cryptosystem, and also a digital signature scheme which works similarly to the public key cryptosystem. The public key is the product pq and the exponent; the private key is the individual primes. Probably everyone's already seen this; I'm not going to go over how the cryptosystems actually work, that's another talk. A second problem that was proposed is the discrete log problem, and in fact Diffie and Hellman used it in their paper to create what's called a key exchange, which is slightly weaker than a public key cryptosystem. Anyway, here's the problem. Take a large prime or prime power. Take a generator, or an element g of large order, in the multiplicative group of the finite field, and look at the powering function, which takes a number m and just raises g to the m-th power. Again, it's easy to raise things to powers; but if I give you g to the m, it's hard to find m. That m is the discrete logarithm. It's a version of the logarithm in this group.
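Both of these powering constructions are easy to play with. Here's the RSA-style trapdoor in a few lines of Python; the tiny primes and message are my own toy choices, obviously not secure, just a sketch of the idea:

```python
# Toy sketch of the RSA powering trapdoor (tiny primes, no real security).
p, q = 61, 53              # the private primes
n = p * q                  # the public modulus pq
e = 17                     # the public exponent
# Knowing p and q is the trapdoor: it lets us compute the inverse exponent d.
d = pow(e, -1, (p - 1) * (q - 1))

m = 65                     # a plaintext, encoded as a number less than n
c = pow(m, e, n)           # easy direction: raise to the e-th power mod n
assert pow(c, d, n) == m   # inverting needs d, and hence the primes p and q
```

Without p and q, recovering m from c means inverting the e-th power map modulo n, which (conjecturally) requires factoring n.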
The discrete log is used to build what's called the ElGamal public key cryptosystem, and digital signature schemes. The public key is the prime (well, the field), the generator, and some particular power; the private key is the exponent. There is similarly a discrete log problem for any algebraic group. You can do it for the additive group, where it's really easy to solve. People also use it for elliptic curves: if we replace the multiplicative group with the points on an elliptic curve over a finite field, you have the same kind of problem. If I give you the point Q and m times Q, you have to find m. So that's the elliptic curve discrete log problem, or ECDLP. ECDLP is widely used for an elliptic curve version of ElGamal. Why bother using elliptic curves when the finite field computations themselves are much easier? I mean, multiplying numbers in a finite field is easier than adding points on an elliptic curve; at least the formulas are easier. Well, the answer is that the elliptic curve discrete log problem is, at least as far as we know at the moment, harder than the other two problems. What that means is that the keys and the ciphertexts can be smaller. And this is really the big battle in all of cryptography: you want to be really, really efficient, but you also don't want to be insecure. So you play these two things off against each other. And there are countless examples of cryptosystems which are secure but really not efficient enough to be useful; so people tweak them to make them more efficient, and every time you tweak them, you wreck the security. It's usually very, very delicate. But anyway, this is why people use elliptic curves. And here's a real-world example, for those who like to speculate in Bitcoin. A blockchain such as Bitcoin stores millions of digital signatures in the chain, and to save on space and bandwidth, because the blockchain has to be transmitted all over, elliptic curve digital signatures are used.
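To make the discrete log problem from the last couple of slides concrete, here's a toy version in Python (the prime, generator, and exponent are my own choices). The brute-force loop standing in for the "hard" direction is exactly what becomes infeasible at real sizes:

```python
# Toy discrete-log setting in the multiplicative group mod a small prime.
p = 1019                   # a small prime (2 has order p - 1 = 1018 here)
g = 2                      # an element of large order mod p
m = 345                    # the secret exponent

h = pow(g, m, p)           # easy direction: compute g^m mod p

# "Hard" direction: with no structure to exploit, we just try every exponent.
# At cryptographic sizes this loop has more iterations than atoms in Earth.
recovered = next(x for x in range(p) if pow(g, x, p) == h)
assert recovered == m
```

For ElGamal, h together with p and g would be the public key and m the private key; the security is precisely that nobody knows how to shortcut that search (classically) in polynomial time.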
That was what was in the original proposal, and that's what's still used today. Okay. So how hard are these problems? Well, let's be honest: no one knows. As far as we know, these problems could conceivably even be solvable in quasi-linear or, say, quadratic time. They could be really, really easy; it's just that no one's figured out how to solve them. We don't have a proof. So the proof, if you like, that they're hard is that no one knows a good way to solve them. The practical answer for modern cryptography is: how hard are they to solve using existing algorithms, keeping in mind that someone might invent a better algorithm tomorrow? And that's happened before; for example, for factorization, new sieve methods were created. So anyway, here are the three problems, and here's roughly how many steps it takes to solve them, very roughly. You don't really need to parse what those mean, except that the first two are smaller than the third one. What that means as a practical matter is that if you're using integer factorization or discrete logs, your ciphertexts and keys will be, you know, a few thousand bits, while for elliptic curves they can be a few hundred bits. Is that a big difference? Well, even 4,000 bits isn't that much. But there's this caveat, which you probably noticed: existing algorithms on existing computers. What about computers that don't yet exist? That's where quantum computers come in. I know very little about quantum algorithms, so I'll just say a word or two. A quantum computer is a machine in which computation on bits is replaced by computation on quantum bits, which are essentially probability distributions. So a quantum computer that has n qubits can do a simultaneous computation on 2 to the n states. Essentially, well, you can see the numbers there, you get an exponential speedup, assuming you can figure out an algorithm that will operate on a quantum computer.
The largest quantum computers that have currently been built have a handful of qubits. When I used to give talks about this, I'd put, you know, four qubits or eight qubits, but it keeps increasing, so I'll just say a handful; it's certainly more than eight and certainly less than 100 at the moment. But there are huge amounts of money being invested in building quantum computers. For a while there, a good way to get grants was to put quantum computing and quantum cryptography into your grant proposal, but they've wised up a bit on that; you actually have to have ideas. Anyway, I kind of like this analogy. Let's look at flight. The first powered airplane flight was in 1903; it went 852 feet. Fifteen years later, there were aircraft all over the battlefields of World War I. And twenty years after that, we had jets going 500 miles an hour. So, yes, building airplanes maybe is easier than building quantum computers, though maybe not; they're pretty clever engineers. But if you want to predict what's going to happen in 10 or 15 years, that's tricky, and if you want to predict what's going to happen in 40 years, that's really, really hard. So where all of this started, as far as I know, is this paper of Peter Shor's from 1994, called "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer." He gave algorithms that would run on a quantum computer and solve factorization of integers and discrete logs in polynomial time, in fact more or less quadratic time; they're really fast algorithms. And although I don't think the original paper mentioned it, you can adapt the discrete log algorithm to do elliptic curve discrete logs too. So a full-scale quantum computer will pretty much kill all the classical public key cryptography that I mentioned on the previous slides. So people started looking for cryptosystems that couldn't be broken by a quantum computer.
What do I mean by couldn't be broken? I mean that, as far as we know, no one has figured out an algorithm on a quantum computer to break them. Will someone find such algorithms tomorrow? Who knows? But that's about the best we can do at the moment. So the field's currently called post-quantum cryptography, post-quantum really meaning after quantum computers are built. Here's a little bit of what's happening in the real world; Kristen already mentioned some of this. In 2016, NIST, the U.S. National Institute of Standards and Technology, which has been running cryptography competitions to pick cryptosystems for public use (and they're really good at it), issued a report on post-quantum cryptography. Their best estimate, meaning some expert panel's guess, was that a quantum computer capable of breaking 2000-bit RSA, which is fairly standard at the moment, in a matter of hours could be built by 2030, so at that point 15 years out, now 10, for a budget of about a billion dollars. So start saving your pennies, and in 10 years you'll be able to buy your own. Now, a billion dollars for a government is really not very much money. And a billion dollars for a criminal enterprise, that is a lot of money, but if you could use it to strip the full worth of all the Bitcoin wallets in existence, it would be money well spent. So anyway, NIST instituted a competition to create digital signature schemes and public key encryption schemes that would be safe, including after the advent of quantum computers. How did they do that? Well, they solicited proposals; they got (I meant to look it up) at least 40 or 50 proposals. There was a public comment period, and a lot of people weighed in, and then they eliminated some of them, and there were more comment periods and revisions and so on. So actually, I just had to update this.
So last week they got down to their round three candidates. The round three candidates include four public key cryptosystems and three digital signature schemes that are under primary consideration, and there were another maybe six of each that are still under secondary consideration. So that brings me to the mathematics part of the talk, which is about lattices. For the rest of the talk, I want to talk a little bit about lattices and hard lattice problems, and then explain how they can be used to create public key cryptosystems or digital signatures. And then at the end, I'll talk about a particular system that was actually invented quite a few years ago, but that some of us recently have been working with for some particular applications, which I will get to at the end if I have time, which I might actually have. I hope I didn't zip through that too quickly, but I wanted to get to the math part. Okay, so: public key crypto and hard lattice problems. In the mid-1990s, right around the time that Shor's paper appeared, a bunch of cryptographers and mathematicians were looking at hard lattice problems to use for cryptography. This wasn't because of quantum computers, which I think no one at the time really had on their radar screens; they were doing it for the following reasons. First, Ajtai and Dwork, in this beautiful paper, created a public key cryptosystem whose average-case security was equal to its worst-case security. Right, so just think back to RSA with integer factorization: even if we could prove that there are products of primes that are hard to factor, that doesn't mean that the product of the two particular primes you chose is hard to factor. So results like this are really nice, because they say that if you pick a random instance of your problem, there's a pretty good chance that it is as hard as the hardest case of your problem.
Now, the cryptosystem they created with it unfortunately really wasn't practical; I mean, it took multi-thousands of bits to send one bit of information, or something like that, but it was really interesting. The other reason people looked at hard lattice problems was the hope that one could create encryption schemes and digital signatures that are much, much faster than the classical schemes, the elliptic curve schemes and the RSA-type schemes. I mean, those schemes are relatively fast; there's no problem on a processor like the one in your phone. But, for example, if you want the chip on your credit card to do digital signature processing, there's not much processing power there. So Jeff Hoffstein, Jill Pipher, and I started working on this in the mid-1990s, and Goldreich, Goldwasser, and Halevi were also independently working on it; neither group knew what the other was doing at the time. But later, interest in these systems really blossomed, because it was pointed out that in fact there are no known quantum algorithms that will solve these hard lattice problems in polynomial or even subexponential time. There's a bit of a speedup, just because there's a quantum search algorithm that gives roughly a square root improvement, but that's the best known. Okay, so what's a lattice? A lattice of dimension n is just the Z-linear span of n independent vectors in R^n. So we pick v_1, v_2, ..., v_n to be a basis of n-space, and we take all the linear combinations, but with integer coefficients. The vectors v_1 through v_n are a basis for L, but of course a lattice will have lots of bases, just like a vector space has lots of bases. In lattices there's not quite as much flexibility, but there are still lots of different bases, and some bases are better than others, which I'll explain later.
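One fact worth making concrete: multiplying a basis by any integer matrix with determinant ±1 gives another basis of the very same lattice, which is where all those different bases come from. A tiny sketch in Python (the 2-D basis and the change-of-basis matrix are my own toy numbers):

```python
# Two bases for one lattice: rows of `good` are basis vectors; applying an
# integer matrix U with det(U) = ±1 yields another basis of the same lattice.
good = [[2, 1], [-1, 2]]                  # short, nearly orthogonal basis
U = [[4, 1], [3, 1]]                      # integer matrix, det = 4*1 - 1*3 = 1

# bad[i] = U[i][0]*good[0] + U[i][1]*good[1]  (integer row operations)
bad = [[sum(U[i][k] * good[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(bad)   # [[7, 6], [5, 5]] -- long, skewed vectors spanning the lattice
```

Since U and its inverse both have integer entries, every integer combination of the old basis is an integer combination of the new one and vice versa, so the set of lattice points is unchanged even though the new vectors look much worse.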
A useful concept: given a basis, a fundamental domain for the lattice is the parallelepiped (a higher-dimensional parallelogram) consisting of the linear combinations of the basis vectors with coefficients running from 0 to 1. That's a lot of definitions, but a picture makes it really clear. Here's a two-dimensional example. The dots are of course vectors in the lattice, these two vectors are a basis, and here's the fundamental domain spanned by that basis. So in lattices there are two really important computational problems (well, there are lots of problems one can formulate, but two really, really important ones), which had been studied long before any of this crypto stuff came about: the shortest vector problem and the closest vector problem. Again with the abbreviations: SVP and CVP. The shortest vector problem is exactly what it sounds like: find the shortest vector in the lattice, well, the shortest non-zero vector; the lattice is a group, so it always contains zero. The closest vector problem is: given a vector t that's not in the lattice, find the lattice vector that's closest to it. Now, both of these have approximate versions too, which is what's used in cryptography and actually in many applications: rather than finding the shortest non-zero vector, it's often enough to find a vector that's quite short, or to find a vector that's quite close to t even if it's not the absolutely closest vector. In practice, these two problems are more or less of equal difficulty. I think there's actually a proof in one direction and a plausibility argument in the other, but in practice, solving the closest vector problem in dimension n can usually be set up as a shortest vector problem in dimension n + 1. So, more or less the same, and I'm going to ignore the distinction. So how does one try to solve the closest vector problem? Here's an idea.
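The idea, often called Babai's round-off method, is: write the target vector in your basis with real coefficients, round each coefficient to the nearest integer, and take the resulting lattice point. A minimal 2-D sketch in Python (the lattice, the two bases, and the target are my own toy numbers, not from the slides), showing it succeed with a short, nearly orthogonal basis and fail with a long, skewed basis for the same lattice:

```python
# Babai's round-off for approximate CVP in dimension 2, by hand with Cramer's
# rule: solve t = c1*b1 + c2*b2 over the reals, round c1 and c2 to integers.
def babai(b1, b2, t):
    det = b1[0] * b2[1] - b1[1] * b2[0]
    c1 = (t[0] * b2[1] - t[1] * b2[0]) / det
    c2 = (b1[0] * t[1] - b1[1] * t[0]) / det
    r1, r2 = round(c1), round(c2)
    return [r1 * b1[0] + r2 * b2[0], r1 * b1[1] + r2 * b2[1]]

t = [0.6, 2.2]                     # the target point (not a lattice point)
print(babai([2, 1], [-1, 2], t))   # good basis -> [1, 3], the closest point
print(babai([7, 6], [5, 5], t))    # bad basis  -> [-4, -2], far from t
```

Both answers are genuine lattice points, but only the good basis lands on the one nearest the target; that asymmetry between bases is exactly what the signature schemes coming up exploit.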
So these two vectors are my basis vectors, and someone gives me this target vector, and I'm trying to find a close vector in the lattice. So what do I do? Well, I take the fundamental domain, the standard fundamental domain, and I translate it so that the target point is in the translated fundamental domain. That's actually easy to do; I'll say it in words mathematically. You just take the vector t and write it as a real linear combination of the basis vectors, take the greatest integer of each coefficient, and you'll get the translated fundamental domain that contains the target vector. And then you just take the vertex in that fundamental domain that's closest, and there's your candidate. And you can see I found the closest lattice point to the target vector. But the reason that works is that this basis is reasonably orthogonal. So that's what I mean by a good basis: a basis that's fairly short and reasonably orthogonal. So let me give you an example of a bad basis. The red basis here is a good basis; you can see it's fairly orthogonal. These two green vectors span the same lattice, but they're not orthogonal at all. Now look what happens if I try to solve the closest vector problem using the bad basis. There's a target point, I just chose it at random. This is the fundamental domain, there's this
long skinny thing that contains the target point. So I choose the vertex of the fundamental domain that's closest; there it is. Not so bad, except that is not the closest lattice point, right? The closest lattice point is here, the red point. Okay, if you've never really worked with this stuff, I can see people rolling their eyes: if someone gives you the target point, you can just eyeball where the closest lattice point is. And that's true, because although these pictures are great, two dimensions is too small to get an idea of how hard these things are. In two dimensions these problems are very, very easy, and in fact (I don't know whether he was the first) Gauss had a very efficient algorithm for solving the shortest vector and closest vector problems in two dimensions. And actually it's pretty easy in dimensions three, four, five, ten, whatever. In the early 1980s, Lenstra, Lenstra, and Lovász came up with a lattice reduction algorithm, now called the L³ algorithm, which is actually pretty efficient even in higher dimensions. Their algorithm has been extended by a whole bunch of people; there's a lot of ongoing research on lattice reduction algorithms. L³ finds a pretty good basis in polynomial time (polynomial in the dimension of the lattice, that is), and that suffices for many applications. But if you want to find a really good basis, even using lattice reduction algorithms like L³, the problem still seems to be exponentially hard, exponential in the dimension. So dimension 10, not so bad; dimension 50, getting harder; dimension 10,000, no way. And the really important thing for all of this in terms of post-quantum crypto is that there are no known quantum algorithms to solve shortest vector or closest vector problems in polynomial or even subexponential time, even if you have a working quantum computer with lots of
quantum bits and lots of quantum storage. Okay, so I thought I'd explain how one can create a digital signature scheme using the closest vector problem. This one's not actually efficient enough to be useful, but it was one of the earliest ones, proposed by Goldreich, Goldwasser, and Halevi. So there's the good basis, that's Alice's private key, and here's the bad basis, that's her public key. So Alice publishes these two vectors. Now she wants to sign a document. Technically she runs her document and her public key through what's called a hash function; anyway, it's used to create a random vector that is tied to her document, and probably tied to her public key also. So there's this random vector that's not a lattice vector. Now, Alice knows the good basis, right? So she can use that procedure I had a couple of slides ago: take the fundamental domain for this basis, translate it until the target point is in it, and then pick the closest vertex in that fundamental domain. So she can find a nearby lattice vector. It may well actually be the closest lattice vector; in any case it's very close, because she has this good basis she can use. Notice that when Alice publishes this vector, she publishes it as a linear combination of the bad basis. So she uses her good basis to find the signature vector, but she publishes it as a linear combination of the bad basis. That means Bob, or anyone who wants to verify the signature, can check that the signature vector, the lattice vector she publishes, is close to the document vector (that's just checking distances), and they can check that the signature vector is actually in Alice's lattice, because they can check that it's in the lattice generated by the bad basis. Right? If I give you two vectors, and then a third vector, and ask whether it's an integer linear combination of the first two, well, that's easy: you just write it as a real linear combination
and check whether the real coefficients are integers. Okay, so consider the GGH digital signature scheme versus L³. The security comes down to how good L³ is at solving this closest vector problem, which basically comes down to how good L³ is at finding a pretty good basis. And the answer is that in dimensions less than 100, versions of L³ find a good enough basis to break GGH, and even up to around 200 you could probably break it. But the GGH public key, remember, is a basis for the lattice: that's n vectors, each with n coordinates, so the public key has about n² bits (big O of n², anyway), and that gets pretty big if n is, you know, 500. So if you take a value of n that makes GGH secure, the keys are likely to be on the order of two megabits or so. Remember how big the elliptic curve keys were? They were like 300 or 400 bits. So this is many orders of magnitude larger. Okay. Independently of the work that GGH were doing, and at more or less the same time, Jeff Hoffstein, Jill Pipher, and I were working on a public key cryptosystem called NTRU. And I really want to stress that this was really Jeff Hoffstein's idea; he recruited Jill and me to help him develop it, and we did develop it as a team, but the initial idea was really Jeff's. NTRU is also a lattice-based system, but it uses what I'd call a cyclotomic lattice, which means that the basis vectors are cyclic shifts of one another: the coordinates are just cyclic shifts. So basically you can specify the entire lattice just by giving one vector and using its shifts to generate the other basis vectors. This added symmetry allowed us to get down to keys that were roughly RSA size, not all the way down to elliptic curve size (that would be the gold standard), but, you know, two to four thousand bits, or eight-thousand-bit keys, say. There are now lots and lots of cryptographic constructions based on hard lattice problems, and
most of the NIST round three proposals, in fact, are lattice-based. So in the remaining ten minutes or so, I'd like to describe an old lattice-based system; the original prototype for it actually predates NTRU by a few years, even, but it's been modified and changed over the years in various ways. The reason I thought I'd describe it is two-fold. First, it's actually based on some kind of cool mathematics. And second, we (I'll mention who) recently figured out how to use it to do signature aggregation, and I'll mention at the end why that's an interesting thing to do. So a prototype of PASS, as in the little note at the bottom, dates back to 1997, and many people have worked on developing it and doing things with it; I list a bunch of the names down there, and I apologize if I've left anyone out. Although PASS is ultimately a lattice-based system, in the sense that its security depends on the difficulty of a hard lattice problem, the way it's really constructed uses the discrete Fourier transform, which is the cool math, at least I think it's cool. So how do discrete Fourier transforms work? Well, you start with a finite field F_q; pick your favorite finite field. Take a primitive n-th root of unity ζ in that field, so n would be some divisor of q − 1. And, just filing this away in the back of your mind for now, fix a subset T of the numbers from 0 to n − 1 containing roughly half of them. Okay, what is the discrete Fourier transform? It's a map from the n-dimensional vector space over F_q to itself. It sends the vector a = (a_1, ..., a_n) to the vector â, where the coordinates of the Fourier transform are obtained by taking the coefficients of the original vector, using them to form a polynomial, and plugging in powers of ζ: 1, ζ, ζ², ζ³, and so on. So the original formulation of PASS actually was in terms of
polynomial evaluation; that's what the P stands for. You can think of this as multiplying the original vector by a Vandermonde matrix that has powers of ζ in it. Anyway, that's the discrete Fourier transform. For cryptographic purposes, the n's that we use are not that big, they're in the thousands, so computing this is not that time-consuming anyway; but of course, as people probably know, there are fast Fourier transform methods. In principle there are n coordinates here, each of which takes time n to compute, so that's O(n²), but in fact you can do the computation in time O(n log n) using fast Fourier methods. Okay. These things are easy to compute, and fast, and in fact the map is invertible, a bijection, as long as n is relatively prime to q. And I may be cheating a little bit here, but essentially it gives a ring homomorphism, and it's a really cool ring homomorphism. On the left-hand vector space we use the convolution product; if you know the convolution product, fine, and if you don't, basically think of the vectors as coefficients of polynomials, then multiply the two polynomials and reduce by setting x^n equal to 1. So that's some sort of multiplication on that side. And on the other side we take the multiplication where we just multiply component by component. So the convolution multiplication is a bit more complicated than the component-wise multiplication, but anyway, this is all really standard classical mathematics. The problem that I want to use is what's known as the partial Fourier transform problem. Suppose instead of giving you the entire list of Fourier coefficients, I just pick out some of them using this set T of indices. So I'll let â restricted to T just be the Fourier coefficients for the indices in T. So I'm going to give you half the Fourier
My challenge to you is: find the vector I started with. Well, you can't do that in general, because I could have started with lots of vectors. Remember, I said this map is one to one, but if I only give you some of the coordinates, there will be lots of vectors that get mapped to them; the kernel of the linear transformation will be big. But I'm also going to give you extra information: I'm going to tell you that the initial vector I started with had very small coordinates compared to q, the prime I'm working with. In fact, typically one might take the initial vector to have all of its coefficients equal to 0, 1, or −1. The actual value of this vector is secret, but now there's probably only one vector with coefficients in {0, 1, −1} that has these Fourier coefficients, even though I've only given you half of them. And that's the challenge problem, that's the hard problem: given partial Fourier data of a small vector, recover the vector. So why is this a lattice problem? Well, let me translate it into a lattice problem for you. Take the Fourier transform, that ring homomorphism, and project onto the coordinates specified by the indices in T; call the composite map φ. Basically, I'm starting with a small vector in Zⁿ, I'm giving you its image under φ, and I'm asking you to invert. Well, the kernel of φ is a sublattice of Zⁿ. So here's how you could solve the problem at the top of this slide. First, I'm giving you a point in the target, and φ is just a linear transformation, so pick any vector in the domain that goes to the target vector; that's basically a linear algebra problem over F_q. It won't be the right answer, because the vector you get will have large coordinates, but it's a candidate: a vector α with the right image. Then you find a vector in the kernel lattice that's close to α.
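At toy sizes you can see the recovery phenomenon directly by brute force. This is purely illustrative (the parameters and index set are mine, and at real sizes such a search is hopeless, which is why the lattice viewpoint matters):

```python
from itertools import product

q, n, zeta = 13, 4, 5   # toy parameters
T = [1, 3]              # indices of the published Fourier coefficients

def dft(a):
    return [sum(a[i] * pow(zeta, i * j, q) for i in range(n)) % q
            for j in range(n)]

secret = [1, -1, 0, 1]                 # short vector, entries in {-1, 0, 1}
given = [dft(secret)[j] for j in T]    # only half the Fourier data is revealed

# Exhaustively search all 3^n ternary vectors for ones matching the partial data.
matches = [list(v) for v in product((-1, 0, 1), repeat=n)
           if [dft(v)[j] for j in T] == given]

assert secret in matches   # the secret is certainly among the matches
print(f"{len(matches)} ternary vector(s) match the partial Fourier data")
```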
Finding a lattice vector close to α is a hard problem; that's the closest vector problem. But if you can solve the closest vector problem, then you can do it, and for suitable parameters this will be a hard closest vector instance in general. The target vector, my secret vector, will then simply be this arbitrary vector α that goes to the right place, minus the nearby vector in the kernel lattice; that difference is a small vector with the right image. So that's how solving the closest vector problem solves this partial discrete Fourier inversion problem. Okay, so I have like two more slides, I think, maybe three. Here's how to use all that to create a digital signature scheme. Alice's private signing key is just some short vector, say with all 1's, −1's, and 0's. She publishes its Fourier transform, but only part of it, the part indexed by T. If she wants to sign a document, she chooses a completely random vector, and she uses the partial Fourier transform of that random vector, together with her document, to create a vector γ in F_qⁿ via a hash; this γ ties her signature to the document. She computes the convolution product of her secret short vector with γ, plus the random vector (that's how the formula should be read), and her signature is the partial Fourier transform of the random vector together with that signature vector. How does Bob verify the signature? Well, he knows the partial Fourier transform of the random vector and he knows the document, so he can do the hash and recreate γ. Since he knows all of γ, he can compute its Fourier transform. He then does a computation in the ring where you're just doing coordinate-wise multiplication, and checks that the result matches up with the partial Fourier transform of Alice's signature vector; he knows the whole signature vector, so he can compute that. And he also checks that the signature vector is short. Okay, this is a lot to take in if you haven't seen it before.
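To fix ideas, here is a toy, completely insecure sketch of that sign-and-verify flow. The parameters, the hash-to-ternary map, and the omission of the shortness check are all simplifying assumptions of mine; the point is only to show why the verification equation balances, thanks to the ring homomorphism.

```python
import hashlib
import random

q, n, zeta = 13, 4, 5   # toy parameters; not remotely secure
T = [1, 3]              # the published Fourier coordinates

def dft(a):
    return [sum(a[i] * pow(zeta, i * j, q) for i in range(n)) % q
            for j in range(n)]

def conv(a, b):   # convolution product: polynomial multiplication with x^n = 1
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

def hash_to_vector(partial, doc):
    # Tie the signature to the document: hash down to a ternary vector gamma.
    h = hashlib.sha256(repr((partial, doc)).encode()).digest()
    return [h[i] % 3 - 1 for i in range(n)]

def keygen(rng):
    f = [rng.choice((-1, 0, 1)) for _ in range(n)]   # private short vector
    return f, [dft(f)[j] for j in T]                 # public key: partial DFT

def sign(f, doc, rng):
    r = [rng.choice((-1, 0, 1)) for _ in range(n)]   # random short vector
    r_hat_T = [dft(r)[j] for j in T]
    gamma = hash_to_vector(r_hat_T, doc)
    z = [(x + y) % q for x, y in zip(conv(f, gamma), r)]  # z = f * gamma + r
    return r_hat_T, z   # a real scheme would also require z to stay short

def verify(pub_T, doc, sig):
    r_hat_T, z = sig
    gamma_hat = dft(hash_to_vector(r_hat_T, doc))
    z_hat = dft(z)
    # Coordinate-wise check on T: DFT(z) = DFT(f) ⊙ DFT(gamma) + DFT(r).
    return all(z_hat[j] == (pub_T[k] * gamma_hat[j] + r_hat_T[k]) % q
               for k, j in enumerate(T))

rng = random.Random(1)
f, pub = keygen(rng)
sig = sign(f, "my document", rng)
print(verify(pub, "my document", sig))   # -> True
```

The check succeeds because the DFT turns the convolution product into coordinate-wise multiplication, so the equation z = f ⊛ γ + r can be verified using only the T-indexed Fourier coordinates of f and r.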
But I just wanted to indicate that it's really just very straightforward computations in these two rings, one where you're doing the convolution product and one where you're doing the coordinate-wise product, together with these Fourier transforms, which are just linear transformations. Okay. So, the classical schemes based on integer factorization or discrete logs will be killed off by quantum computers. The digital signature schemes in the round three NIST competition have signature sizes at least comparable to RSA: they're in the thousands of bits, not up to the gold standard where the keys and messages are in the few hundreds of bits and the signatures are in the hundreds of bits. Is that important? Well, it depends on the application. For your computer or your cell phone, 4,000-bit keys are fine, but for blockchains, where you're storing and transmitting millions of signatures, this extra order of magnitude can make a difference. So Jeff and Berk Sunar and I have been working on some things: we created a variant of PASS that allows you to amalgamate lots and lots of signatures into one signature that's much smaller than storing all the signatures together. So one signature is, well, I wrote 2,000 bits, or 4,000; anyway, it's a standard sort of RSA-size signature. But if you want to amalgamate n signatures, each additional signature only takes a few hundred bits, so roughly elliptic-curve-size signatures. Basically, you can store a million signatures in about the same space using this variant of PASS as you could using the elliptic curve systems. Why bother? Well, because, at least potentially, these lattice-based systems are secure against quantum computers, while the elliptic curve systems aren't. So, in conclusion, I want to thank everyone for being here, but this is my current mantra: quantum computers are coming for you. Everyone should start preparing now. In particular, if you have something you want to keep secret for a year,
don't worry about it too much. Ten years, you probably do want to worry. Fifty years? I would not, for example, sign a contract that you want to be secure for fifty years using any of the current systems without also adding a quantum-secure signature. You can do both, and that's what people should really be doing at this point: put RSA signatures or elliptic curve signatures and one of these quantum-secure signatures on at the same time. Anyway, finally, thank you very much for coming, and I want to thank the organizers for inviting me to give this seminar; I've attended many of these and they have been really great. I also want to thank all the people who have spoken at the NTW seminar so far; it's been really wonderful. So thank you very much.

Thank you so much for your wonderful talk. Let us please unmute so that we can talk.