The TA for this course is Jana Sotakova, and she'll be doing the problem sessions, so I want to start by thanking Jana for her help with putting this course together. I'm not going to go over the outline in detail right now, but the goal of my first lecture is really to explain all the words in the title: supersingular isogeny graphs in cryptography. So let's start with cryptography. I heard that Joe Silverman gave a nice lecture about cryptography last week, and some of you may already have some familiarity, but very briefly, from a high level: cryptography is the science of keeping secrets, but it's more than that. Most people know about encryption, but people don't often think about authenticity. Cryptography is also used as a tool for guaranteeing authenticity. One way to see the importance of authenticity is this: if your web browser opens a secure connection to buy something from, say, Amazon, the secure connection is supposed to protect the confidentiality of your credit card number. But that doesn't do you a lot of good if, instead of being connected to Amazon, you're actually connected to some other party, and you're giving them your credit card number and your money instead of the party you're trying to buy from. So authenticity is very important as well. The tools we use for protecting authenticity are called digital signatures, and the technology we use for creating, for example, secure browser connections is key exchange.
I generally think of the three main building blocks of public key cryptography as being key exchange, signatures, and encryption, and one way you can see that this is roughly the right way to think about it is that over the last five years there's been an international competition run by NIST, the National Institute of Standards and Technology, organized around exactly these primitives. Key exchange lets two parties agree on a common secret using only publicly exchanged information, whereas a signature scheme is a mechanism that allows parties to authenticate themselves. When we say public key cryptography, we're generally referring to the idea that there's an asymmetry, which is why you also hear it called asymmetric cryptography: there's some public key that everyone knows, or that you publish, and then some secret key that corresponds to that public key. So if you're the authentic user, for example, and you're trying to demonstrate your authenticity, you use your own secret key to sign something in a digital signature scheme, and then anyone can verify that signature using your public key. One thing I think is interesting to point out, from a societal point of view, is that this type of scheme requires reliable public keys, right? It doesn't do you any good to verify somebody's signature with their public key if you have the wrong public key for them. So you can think of what we call PKI, public key infrastructure, as a kind of public phone book: a trustworthy record of people's public keys. And it's an interesting societal problem to even think about who should maintain a public key infrastructure, and how. I'm not all that up to date on everything going on in the world, but as of about five years ago, the only government I knew of that actually maintained a public key infrastructure was South Korea.
In the US, we rely on private companies to maintain the public key infrastructure that allows us to securely authenticate ourselves. If any of you have had the experience of traveling abroad with your laptop, all of a sudden you'll get these messages saying, hey, the certificate for this website or for this connection is not recognized, and that's because we don't have a global public key infrastructure. Okay, so now to connect back to the mathematics of public key cryptography: examples of public key cryptosystems deployed today are RSA, which is based on the hardness of factoring large integers, and Diffie-Hellman, or really the discrete logarithm problem, which is what Diffie-Hellman relies on. Unfortunately I don't have time to go into all the details of these systems, but Diffie-Hellman is based on the hardness of the discrete logarithm problem, either in the multiplicative group of integers modulo a prime p, or in the group of points on an elliptic curve. So ECDLP and ECDH are the elliptic curve versions of the DLP and Diffie-Hellman problems. And for digital signatures, DSA also has an elliptic curve version, ECDSA. As a little side note on how things work: how do we get to the point where we're all using RSA or elliptic curve cryptography? There's a standardization process, and earlier in my career I worked a lot on standardization of elliptic curve cryptography. Experts, basically researchers like the people in this room, get together and assess the security of the elliptic curve systems, meaning how hard they think these systems are to break. And then government agencies, such as NIST and others, and also professional societies, such as IEEE, create standards based on what the researchers have determined.
And then finally, the last stage is industry-specific standards created for the use of those things. So for example, for elliptic curve cryptography, X9.62 and X9.63 are the ANSI standards, ANSI being the American National Standards Institute, that specify secure ways of using elliptic curves, both for digital signatures and for key exchange, surprise, surprise. So that's how these systems actually come into widespread use. And over the last five years, I've been working intensely on standardization for homomorphic encryption, which is a different type of encryption technology based on lattices. I'm not going to talk about that today, but I'd be happy to talk with anyone who's interested. I've mentioned already a few of the important applications of cryptography in our daily lives: secure browser sessions, based on the SSL or TLS standards; signed and encrypted email, which almost no one uses, based on the S/MIME standard; and virtual private networking, which allows you to securely connect remotely, for example to a corporate network, using IPsec. And all of these things require authentication through certificates, for example X.509 certificates. These are just examples of some of the standards; it's not an exhaustive list, there's a whole zoo of standards. Okay, so let me talk a little bit about elliptic curve cryptography, because it's a precursor for talking about isogeny-based cryptography, or supersingular isogeny graphs in cryptography, which is my title. For elliptic curve cryptography, you start from an elliptic curve, and I know many of you are probably very familiar with elliptic curves. In this set of lectures, I'm not going to go into rigorous definitions of what a curve is and all of those things. I'm just going to tell you what an elliptic curve is from a cryptographer's point of view, which is not my background, but it's the environment that I work in.
I'm very familiar with people who think of elliptic curves this way. You start with a very large prime; for elliptic curve cryptography, "cryptographic size" typically means at least 256 bits. In cryptography, you just have to get used to all the numbers in sight being extremely large. If the prime is roughly 2^256, it takes you 256 bits to write it down. That's the amount of space it takes to represent this number, and space is an important thing to think about. Another thing you always want to be thinking about is how much time it takes to compute on these numbers, and even how you compute. When I started working at Microsoft about 23 years ago, we were mostly using 32-bit architectures; now it's 64-bit architectures. So you take a 256-bit number, you break it up into chunks of 64 bits, and you have to implement modular multiplication algorithms on these individual words. The size of the prime makes a really big difference. So 256 bits is the minimum size for elliptic curve cryptography. And you can think of an elliptic curve over a prime field that large; in particular, since p is that large, you're not in characteristic two or characteristic three. There were actually standards for using elliptic curve cryptography over extension fields, so you could have, say, the field of 2^256 elements, in other words not a prime field. But there are various insecurities in working over extension fields, especially when the extension degree is composite instead of prime. So in general, it's just easier and safer to stick with prime fields, which in particular means you're not in characteristic two or three.
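Since this word-level representation comes up whenever you implement big-number arithmetic, here's a minimal sketch of the idea, my own illustration in Python (real implementations live in C or assembly):

```python
# Toy sketch of how a big-number library stores a 256-bit integer
# on a 64-bit architecture: as four 64-bit "limbs" (words).
WORD_BITS = 64
MASK = (1 << WORD_BITS) - 1

def to_limbs(n, num_limbs=4):
    """Little-endian list of 64-bit words representing n."""
    return [(n >> (WORD_BITS * i)) & MASK for i in range(num_limbs)]

def from_limbs(limbs):
    """Reassemble the integer from its limbs."""
    return sum(w << (WORD_BITS * i) for i, w in enumerate(limbs))

# Example with a well-known 255-bit prime (the Curve25519 field prime):
p = 2**255 - 19
assert from_limbs(to_limbs(p)) == p
```

Modular multiplication of two such numbers then reduces to schoolbook (or Karatsuba) multiplication on the limbs followed by reduction mod p, which is why the word size of the architecture matters so much for performance.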
So that means any elliptic curve over this field has a short Weierstrass form, which you can write simply as y² = x³ + Ax + B, where A and B are just integers modulo p. A really nice thing about elliptic curves is that there's such a thing as the j-invariant, which allows you to label the elliptic curve. This is not so important for usual elliptic curve cryptography, but it's going to be very important for us in isogeny-based cryptography. The j-invariant is just a rational function in the coefficients; I'm sorry, PowerPoint isn't that great at displaying mathematical equations, so it might look a little messy, but it's just 1728 · 4A³ / (4A³ + 27B²). You may recognize the discriminant in the denominator. There's a lot we could say about the j-invariant: you can think of it as a modular function evaluated at an algebraic integer that corresponds to the elliptic curve through CM theory, but I'm not going to talk about that subject in this set of lectures either. The important thing about elliptic curves in cryptography is that they have both a geometric structure, meaning they're defined by this equation and you can think of them as curves in the plane, and an algebraic structure, given by the chord-and-tangent method for computing the group law. Many of you will have seen the group law: to add two points on an elliptic curve, you simply pass a line through those two points and look for the third point of intersection. All of this, as I said, is just a pre-discussion for what we're going to talk about today, because this is how the usual elliptic curve cryptography works, in other words, the elliptic curve cryptosystems that, as I said, are already deployed and in widespread use today.
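Since the j-invariant formula above is all you need to label a curve, here is a short sketch of computing it over F_p (my own illustration, not code from the lecture):

```python
def j_invariant(a, b, p):
    """j = 1728 * 4a^3 / (4a^3 + 27b^2) mod p, for y^2 = x^3 + ax + b over F_p."""
    num = 4 * pow(a, 3, p) % p
    den = (num + 27 * pow(b, 2, p)) % p   # 4a^3 + 27b^2, the discriminant term
    if den == 0:
        raise ValueError("singular curve: 4a^3 + 27b^2 = 0 mod p")
    # pow(x, -1, p) computes the inverse mod p (Python 3.8+)
    return 1728 * num * pow(den, -1, p) % p

# y^2 = x^3 + x has j = 1728, and y^2 = x^3 + 1 has j = 0:
assert j_invariant(1, 0, 83) == 1728 % 83
assert j_invariant(0, 1, 83) == 0
```

The two asserts check the two classical special values: A-only curves give j = 1728 and B-only curves give j = 0.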
And the way those work, like I said, is based on the discrete logarithm problem. There's a public point P on the elliptic curve that's known and published, and when you want to use the elliptic curve in some protocol, such as a key exchange or a digital signature, you take some private secret number that only you know, call it m, and you multiply the point by it. The group law on the elliptic curve is written as addition, so you add the point to itself m times, but nobody else knows m. So you come up with a point, say m·P = Q, and people can look at the point P and the point Q on the curve and they have no way of figuring out what m was. That's called the discrete logarithm problem: what was the discrete log in this case? So that's how you do the usual, classical kind of cryptography on elliptic curves. And in that setting, you would never use supersingular elliptic curves. Well, I shouldn't say never, my mom says never say never, but the point is that after supersingular elliptic curves were proposed early on, probably in the late 80s or early 90s, Menezes, Okamoto, and Vanstone found the MOV attack on them. The attack takes the Weil pairing, or any other pairing on the elliptic curve, and moves the discrete logarithm problem on the elliptic curve into a finite field: if the elliptic curve is supersingular over F_p, then because of the supersingular property, the Weil pairing maps the discrete logarithm problem into the multiplicative group of F_{p²}, a field which is relatively small compared to the size you would use for discrete log in a finite field, and so you end up with a very insecure elliptic curve. So in usual ECC, we do not use supersingular elliptic curves.
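To make the m·P computation concrete, here is a hedged sketch of the textbook chord-and-tangent group law and double-and-add scalar multiplication (a toy version; deployed ECC uses constant-time algorithms and projective coordinates):

```python
def ec_add(P, Q, a, p):
    """Chord-and-tangent addition on y^2 = x^3 + ax + b over F_p.
    Points are (x, y) tuples; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = point at infinity
    if P == Q:                           # tangent line (doubling)
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                # chord through P and Q
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(m, P, a, p):
    """Compute m*P by double-and-add: O(log m) group operations,
    so it is fast even when m is a 256-bit number."""
    R = None
    while m:
        if m & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        m >>= 1
    return R
```

Given P and Q = m·P, recovering m is the elliptic curve discrete logarithm problem; computing in the forward direction costs only about log₂ m doublings and additions, which is exactly the asymmetry the cryptography relies on.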
So I haven't even said what supersingular means. It's an important definition from the mathematical point of view, but it's not very important from the implementation point of view. One way to define it: if your elliptic curve is defined over some finite field of characteristic p, it is supersingular if there are no p-torsion points, even if you go all the way to the algebraic closure. Something a little more important for our topic today, isogeny-based cryptography, is that the endomorphism ring of a supersingular elliptic curve is actually isomorphic to a maximal order in a definite quaternion algebra. I'm not going to talk about quaternion algebras today, nor the maximal orders which are the endomorphism rings, but I will talk about them in the third lecture tomorrow. My favorite reference for reading about this is actually Joe Silverman's book, section 4.1 I think, if anybody wants to look a little more before tomorrow. An important fact is that every isomorphism class of supersingular elliptic curves has a representative over F_p or F_{p²}. So if we fix the characteristic p, some large cryptographic-size prime, and we talk about the set of isomorphism classes of supersingular elliptic curves, you can think of each of them as having a representative defined either over F_p or F_{p²}. Now, a little interlude. I'm not sure if Joe covered this last week, but if there's one thing that's really important for a number theorist thinking about working in cryptography, the main thing to really internalize is what we mean by a hard math problem. I learned that I should include this slide after giving this talk on supersingular isogeny graphs to countless number theory audiences.
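For a curve defined over the prime field F_p with p > 3, an equivalent and very concrete criterion is that the trace of Frobenius is zero, i.e. #E(F_p) = p + 1. Here is a brute-force sketch of that test (my own toy illustration, usable only for tiny primes, since it loops over all of F_p):

```python
def count_points(a, b, p):
    """Naive O(p) point count of y^2 = x^3 + ax + b over F_p; toy primes only."""
    def legendre(n):
        n %= p
        if n == 0:
            return 0
        # Euler's criterion: n^((p-1)/2) is 1 mod p for squares, p-1 otherwise
        return 1 if pow(n, (p - 1) // 2, p) == 1 else -1
    # one point at infinity, plus 1 + legendre(f(x)) affine points per x
    return 1 + sum(1 + legendre(x**3 + a * x + b) for x in range(p))

def is_supersingular(a, b, p):
    """For E defined over F_p with p > 3: supersingular iff #E(F_p) = p + 1."""
    return count_points(a, b, p) == p + 1

# y^2 = x^3 + x is supersingular exactly when p = 3 mod 4:
assert is_supersingular(1, 0, 83)       # 83 = 3 mod 4
assert not is_supersingular(1, 0, 73)   # 73 = 1 mod 4
```

At cryptographic sizes one would of course use a polynomial-time point-counting algorithm (such as Schoof's) or curves whose order is known by construction; the brute-force count is purely to make the definition tangible.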
And having very prestigious, esteemed number theorists ask me at the end of the talk, essentially, why the problem is hard, thinking about a very small prime and the intuition of what you would do if p were very small. The point is that a cryptographic problem like discrete log, like inverting the Weil pairing, like factoring, is very easy when the numbers are small. How do you factor the number 15? The intuition to fight is the mathematician's sense of hard: a lot of mathematicians think a problem or an area is hard if the theorems are hard to prove, where a conjecture or an unsolved problem means it's hard to find a proof. That is not what we mean by hard in cryptography. Specifically, what we mean is this: suppose it takes you m bits to represent the problem, like I said at the beginning, the prime for an elliptic curve might be 256 bits. Then hard means that the fastest algorithm you have for solving the problem runs in exponential time, meaning time roughly proportional to 2^m: m is the number of bits, and m is in the exponent. There's a little bit of an unfortunate difference between the math and computer science communities in this terminology, so I like to be very explicit. Exponential time in m means the algorithm runs in time roughly 2^m. Polynomial time is really easy: it means the best algorithm runs in time polynomial in m. And in between we have the category we call sub-exponential. I'm just giving you an example of a sub-exponential algorithm here, an L[1/3] algorithm, which means the algorithm runs in time e to the power of some constant c times m^(1/3) times (log m)^(2/3).
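To see how different these three growth rates are at cryptographic sizes, here is a tiny numerical sketch. The constant c ≈ 1.9 is roughly the number field sieve constant, and I'm following the lecture's convention of writing the L[1/3] exponent in terms of the bit length m (the exact form of L-notation varies by convention, so treat this as illustrative only):

```python
import math

def poly_time(m, d=3):
    """Polynomial time: m^d steps."""
    return float(m) ** d

def subexp_time(m, c=1.9):
    """An L[1/3]-style running time: exp(c * m^(1/3) * (log m)^(2/3))."""
    return math.exp(c * m ** (1 / 3) * math.log(m) ** (2 / 3))

def exp_time(m):
    """Exponential time: 2^m steps."""
    return 2.0 ** m

m = 256
# At m = 256 bits the three regimes are separated by many orders of magnitude:
assert poly_time(m) < subexp_time(m) < exp_time(m)
```

This gap is exactly why RSA needs 2048-bit moduli (a sub-exponential attack exists) while ECC gets away with 256-bit fields (only exponential attacks are known).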
That's just an example; an L[1/2] algorithm would replace the one-third and two-thirds with one-half. And the constant c is also very important for practically setting up cryptosystems. So here's the motivation for the topic I'm talking about today, isogeny-based cryptography: the quantum threat. Many of you probably know that many teams around the world, many physicists, are working on trying to build a quantum computer at scale. There are already small quantum computers that can handle computations on, let's say, dozens of qubits. But it's still an open question whether anyone will be able to build quantum computers at scale that can handle computations on hundreds or thousands of logical qubits. If they are able to, the fact is we've known since the 90s about Shor's algorithm, which gives you a polynomial-time method for factoring large numbers like RSA moduli, and it can also be extended to attack elliptic curve crypto, the traditional ECC deployed today. If m is the number of bits of your RSA modulus, the running time using Shor's algorithm for factoring will be roughly 4m³. As you can see, that's a polynomial-time algorithm, and it requires a quantum computer that can process about 2m qubits. A typical minimum size for RSA these days is m = 2048 bits, so you'd need a quantum computer that can process more than 4,000 logical qubits. Once that is built, Shor's algorithm gives a polynomial-time algorithm for breaking RSA. So that's not good. Same thing for ECC, except m is smaller: for ECC, like I said, m is around 10 times smaller, 256 bits. And that's because of the difference in classical algorithms today: we do have sub-exponential factoring algorithms for RSA.
We do not have sub-exponential algorithms for attacking ECC. There's a very beautiful history of sub-exponential algorithms for factoring, starting with the quadratic sieve and moving to the number field sieve and the general number field sieve; those are very beautiful number-theoretic algorithms, in case any of you are interested in that topic. But for ECC we have m = 256, because we have no sub-exponential algorithms. And Proos and Zalka's work, already in 2004, estimated the constants at around 360m³ time for the attack, needing a quantum computer that can process 6m qubits. Six times 256 is roughly 1500, so you actually need only a smaller quantum computer to attack the elliptic curve system, which is, from the classical point of view, of the same strength as RSA-2048. As an interesting note, because of that difference, you may have noticed, if you follow these things, that in 2017 the NSA released new guidelines for using ECC in practice, increasing the requirement from ECC-256 up to ECC-384. You can imagine that their motivation may have been progress on building a quantum computer. Actually, I'm quite familiar with the standards for elliptic curve cryptography. As I said, I worked on them early in my career, roughly my first five years at Microsoft, from 1999 to about 2005, which is when we started deploying elliptic curve cryptosystems in Microsoft operating systems, and they spread into all of our products worldwide at that time. 2006 was when the NSA and NIST released the Suite B requirements governing the use of elliptic curve cryptography for all government contractors, which was the thing that really helped push elliptic curve crypto into widespread use worldwide. And these 2017 guidelines released by the NSA, the CNSA requirements in case you want to look them up, really started to pull back those requirements.
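Plugging in the rough figures quoted above (4m³ time and 2m qubits for factoring, 360m³ time and 6m qubits for elliptic curve discrete log; these constants are the lecture's ballpark estimates, not precise modern resource counts):

```python
def shor_rsa_resources(m):
    """Ballpark Shor costs for factoring an m-bit RSA modulus: (time, qubits)."""
    return 4 * m**3, 2 * m

def shor_ecdlp_resources(m):
    """Ballpark Proos-Zalka-style costs for an m-bit elliptic curve group."""
    return 360 * m**3, 6 * m

rsa_time, rsa_qubits = shor_rsa_resources(2048)   # RSA-2048
ecc_time, ecc_qubits = shor_ecdlp_resources(256)  # 256-bit ECC

# The ECC attack needs fewer qubits (~1536 vs ~4096), even though the two
# systems are of comparable classical strength:
assert ecc_qubits == 1536 and rsa_qubits == 4096
```

So under these estimates, ECC-256 falls to a quantum attacker before RSA-2048 does, which is the asymmetry behind the guideline changes discussed above.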
They noted that the adoption of elliptic curve crypto was no longer required for government contractors. So again, you can imagine the motivation there being the advent of the quantum computer. NIST launched its competition in 2017, saying it would be a five-year competition to select new post-quantum crypto candidates. So here we are, it's 2022, five years later. Before I tell you the punchline: the competition is not over yet, but NIST did make new announcements two weeks ago, which I'll tell you about in a minute. From a high level, it's kind of interesting to see what mathematics was involved in this post-quantum crypto competition. And for any of you in a different field besides number theory, I encourage you: you can still try to come up with new hard math problems to propose for cryptographic systems. Even though NIST is a US government agency, participation in the NIST public workshops was international, with teams from around the world submitting candidates and attacking and evaluating candidates intensely in a series of public workshops run by NIST. One of the things that was considered very favorably was code-based cryptography, first proposed in 1978 by McEliece. You can create cryptosystems from error-correcting codes, and the security of those systems is based on the hardness of decoding random linear codes. So that's the hardness assumption there, analogous to factoring being hard, or, like I mentioned, the discrete logarithm problem for elliptic curves.
But there's an extra assumption too, and that is that you can disguise linear codes that have structure as random linear codes, in such a way that nobody can figure out the structure, because you need that structure for decoding, for being able to decrypt with the secret key yourself. A second area for post-quantum cryptography is multivariate cryptographic systems, proposed by Matsumoto and Imai in 1988. Just think of many, many equations in many variables: solving a multivariate system of equations, but obviously nonlinear, right? We don't want to do linear algebra here; nonlinear systems of equations in many, many variables. A third area, which you heard about last week, is lattice-based cryptography, introduced by Hoffstein, Pipher, and Silverman around 1996 with the advent of the NTRU system. So 1996, that's already more than 25 years ago, just to keep track of time here; 1978 is almost 50 years ago. The reason I note that is that so far our best way of understanding the security of public key cryptosystems is this: once a system has been around for a long time, you look at the best-known attacks on it and you can plot how the attacks improve over time, like you could see the improvement in the sub-exponential factoring algorithms over time, and then you predict the likely security of these systems in the absence of some disruptive new attack that is not predicted. Lattice-based cryptography has been around for more than 25 years. And the last candidate I want to talk about is the topic of my lecture series here: supersingular isogeny graphs. This was introduced into cryptography by myself with my co-authors, Denis Charles and Eyal Goren, in 2005, at the time of the NIST hash function competition. So that system is coming up on 20 years old; it's more than 15 years old.
And it's now an active area of research, isogeny-based cryptography. Just to give you the punchline: last week or the week before, NIST announced the fourth-round candidates, and supersingular isogeny graphs are still being considered for standardization in the fourth round. SIKE is the supersingular isogeny key encapsulation scheme, which I'll talk about in my second lecture here. NIST also announced four cryptosystems selected for standardization, which is pretty cool, and three of them are based on lattices. I'm not sure if Joe talked last week about NTRU, but that is pretty cool, and I'm a huge fan of lattice-based cryptography too. The rest of the talk today will be about supersingular isogeny graphs. So I've mentioned all the hard problems involved; actually, I didn't explain what the hard problem is for lattices. It's a little involved, and you heard about that last week, so I'm not going to go into it, but I'd be happy to talk about it offline with anyone. The hard problem for supersingular isogeny graphs is finding a path between two given nodes: imagine a very, very large graph, you're given two nodes in the graph, and you have to find a path between them. I should say what I mean by a graph. A graph in this context is just (V, E), a set of vertices and a set of edges. Vertices are just nodes; you can picture them in space, but that's just your imagination, they don't really exist in any particular configuration. The edges just specify which nodes are connected to which other nodes. K-regular means that every node has k edges coming out of it. Undirected means the edges are not directed: you don't have to say that an edge goes from this node to that one, it's just an edge between them. And this afternoon, in the second lecture, I'll talk about what we mean by expander graphs and the expansion property.
So the important thing for cryptography, if you want to rely on the hardness of finding paths between two random nodes in the graph, is that it should be really hard to find an efficient routing algorithm. I always like to give an example of a really bad choice of graph: the hypercube. Take the hypercube in n dimensions as a graph: each node is labeled by a string of n bits, zeros and ones, and two nodes are connected to each other if they differ in exactly one bit. So you basically travel around the graph by flipping one bit at a time. Now, given two bit strings of length n, it's easy to find gazillions of paths between the two nodes, right? You just flip every bit where they differ, and you can flip them in any order, and that gives you many, many different paths. So that's a really bad graph to choose for this cryptographic system; you don't want to choose that. Now I want to tell you a little bit about the application that caused us to come up with this proposal for using supersingular isogeny graphs in cryptography, and that is cryptographic hash functions. Cryptographic hash functions are very fundamental for cryptography; they're used as a building block in almost every cryptographic protocol to maintain security. But it's somewhat difficult to actually define the security property you want from them. We often use what's called the random oracle model, but it's actually very hard to describe what that model means. So in practice, what you actually had to do when NIST was running its hash function competition in 2005 was look at a bunch of candidate hash functions and just try to see whether you can reverse-engineer them, or whether they have at least the minimum set of properties you want them to have. So first let me say what a hash function is.
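To make the "bad graph" example concrete, here is a toy sketch of routing in the n-cube (my own illustration): vertices are bit tuples, and one shortest path is found just by flipping the differing bits from left to right. Flipping them in any other order works too, which is exactly why routing here is so easy.

```python
def hypercube_path(u, v):
    """One shortest path from u to v in the hypercube: flip differing bits
    left to right. If u and v differ in d bits, there are d! shortest paths,
    one per ordering of the flips."""
    assert len(u) == len(v)
    path = [u]
    cur = list(u)
    for i in range(len(u)):
        if cur[i] != v[i]:
            cur[i] = v[i]
            path.append(tuple(cur))
    return path

u, v = (0, 0, 0, 0), (1, 0, 1, 1)
path = hypercube_path(u, v)
# consecutive vertices differ in exactly one bit, as hypercube edges require:
assert all(sum(a != b for a, b in zip(p, q)) == 1
           for p, q in zip(path, path[1:]))
assert path[0] == u and path[-1] == v
```

For a cryptographically useful graph we want the opposite situation: no routing shortcut at all, so the best path-finding attack is essentially a generic search.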
It's just a function that maps a sequence of bits of some length, which I'm calling N here, to another sequence of bits of length M. For example, a common use of a hash function is as a message digest: you have a very large object, like a movie, that takes many, many bits, and you get a very short representation of it by mapping it to a sequence of M bits, where M is much, much smaller, like 128 or 256 bits. An important property of a hash function is that it should be very fast and easy to compute, efficient to compute. When we say hash function, we usually mean it's unkeyed, so it does not require a secret key to compute the output, as opposed to a MAC, where you assume the person computing it has a key. The cryptographic properties you want are collision resistance, what we call pre-image resistance, and no bias in the output, so the output is essentially uniformly distributed. A hash function is collision resistant if it's computationally infeasible to find two distinct inputs that map to the same output. In the message digest example, there are definitely going to be many inputs that go to the same output if N is way, way bigger than M, but it should be very hard to actually find two inputs that hash to the same value. And we say a hash function is pre-image resistant if, given a certain output, it's computationally infeasible to find an input that hashes to it. Clearly, if you can find pre-images then you can definitely create collisions, so finding pre-images is, in that sense, the harder problem to solve. Okay, so here's the key thing I need to describe: how do you create a hash function from, well, I'll just call it an expander graph for now, but it's going to be the supersingular isogeny graph.
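To get a feel for why the output length M matters for collision resistance, here is a toy birthday-attack sketch of mine against a deliberately weakened hash: SHA-256 truncated to 16 bits, where a collision shows up after only a few hundred tries (around 2^8 on average):

```python
import hashlib

def toy_hash(data: bytes, out_bits: int = 16) -> int:
    """SHA-256 truncated to out_bits -- deliberately weak, for demonstration only."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - out_bits)

def find_collision(out_bits: int = 16):
    """Birthday search: hash the distinct inputs b"0", b"1", b"2", ...
    until two of them share an output."""
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        h = toy_hash(msg, out_bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg
        i += 1

m1, m2 = find_collision()
assert m1 != m2 and toy_hash(m1) == toy_hash(m2)
```

The same generic attack against the full 256-bit output would take on the order of 2^128 hashes, which is why M = 256 is considered collision resistant against birthday searches.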
So given a k-regular graph, it's very important that each vertex in the graph has a label; that's why I mentioned at the beginning that elliptic curves can be labeled with the j-invariant. What we're going to do is fix a publicly specified starting point for the hash function. Then, when you get some input, some long bit string, we use that bit string as directions for walking around the graph: we divide the bit string into chunks and process one chunk at a time, and each chunk tells you where to go in the next step of your walk around the graph. So you start somewhere, you process the first chunk, and you follow one of the edges. Another very key point is that you're not allowed to go backwards. There are k edges coming out of each node, but once you choose one and go to the next node, you can't go backwards along that edge. So now you only have k−1 choices; that's called no backtracking, no backtracking allowed. And when you get to the end of your walk, the label of that final node is the output of the hash function. This was the general kind of hash function we described in our paper with Goren and Charles in 2005. Here's a picture of the representation for a 3-regular graph: after the first step, you only have two choices at each next step, and those two choices are determined by one bit, so you process the bits in block sizes of one, one bit at a time, to determine how to walk around the graph. That's for a 3-regular graph, for example. So this is a very simple idea, which we proposed in general: you can create cryptographic hash functions from k-regular expander graphs. The expander part I'll talk a little more about later this afternoon.
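Here is a toy sketch of the walk construction on an explicit 3-regular graph; I'm using the Petersen graph as a stand-in for the isogeny graph, and, for simplicity, letting the very first step also consume one bit (so it ignores one of the three starting edges):

```python
# Adjacency lists of the Petersen graph, a small 3-regular graph.
PETERSEN = {
    0: [1, 4, 5], 1: [0, 2, 6], 2: [1, 3, 7], 3: [2, 4, 8], 4: [3, 0, 9],
    5: [0, 7, 8], 6: [1, 8, 9], 7: [2, 5, 9], 8: [3, 5, 6], 9: [4, 6, 7],
}

def walk_hash(bits, start=0):
    """Each input bit picks one of the k - 1 = 2 non-backtracking edges;
    the label of the final vertex is the hash output."""
    prev, cur = None, start
    for b in bits:
        # all neighbors except the one we just came from: no backtracking
        choices = [n for n in PETERSEN[cur] if n != prev]
        prev, cur = cur, choices[b]
    return cur

assert walk_hash([1, 0, 1]) == 8   # the walk 0 -> 4 -> 3 -> 8
assert walk_hash([0, 0, 0]) == 3   # the walk 0 -> 1 -> 2 -> 3
```

On a 10-vertex graph collisions are of course trivial to find; the cryptographic construction uses the same walk on a graph with on the order of p/12 vertices, where finding two walks that meet is believed to be infeasible.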
It's basically good for ensuring that you have a uniform distribution on your output. It's not all that important for the collision resistance and pre-image resistance. Okay, so now we have the job of finding graphs where routing is hard. I've already mentioned that routing is easy in the hypercube, so that's not a good idea. And like I said, I'll talk later today about the expansion property and Ramanujan graphs. What we're trying to avoid is that it's easy to find paths, or that it's easy to find collisions. Collisions correspond to actual cycles in the graph, as you can see here. These are, like I said, undirected graphs; I'm just putting the arrows in to show the direction of your walk. We actually proposed two different graphs for this purpose in our original paper, but the one I'm talking about now is the supersingular isogeny graph. So what is this graph? That's the whole point of this lecture, to actually explain what this graph is. The nodes are going to be isomorphism classes of supersingular elliptic curves modulo p. You fix some p of large cryptographic size; in our original paper, and still today for this hash function application, p can be 256 bits. And each of the isomorphism classes of supersingular elliptic curves will, as I mentioned, have a representative over F_p or F_p squared, so you can label each of them: the j-invariant is an isomorphism invariant, so any elliptic curve in a class has the same j-invariant. You take the j-invariant of your representative, which lives in F_p or F_p squared, and that's the label for that node. And as I mentioned earlier, it's very easy to compute the label given the equation of the curve; it's just a rational function in the coefficients.
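For a curve in short Weierstrass form y² = x³ + ax + b (characteristic p > 3), that rational function is j = 1728 · 4a³ / (4a³ + 27b²). A quick sketch of computing the label, with a helper name of my own:

```python
def j_invariant(a, b, p):
    """j-invariant of E: y^2 = x^3 + a*x + b over F_p (p > 3, nonsingular)."""
    disc = (4 * a**3 + 27 * b**2) % p
    assert disc != 0, "singular curve: 4a^3 + 27b^2 must be nonzero mod p"
    return 1728 * 4 * a**3 * pow(disc, -1, p) % p

print(j_invariant(1, 0, 101))  # y^2 = x^3 + x has j = 1728, i.e. 11 mod 101
print(j_invariant(0, 1, 101))  # y^2 = x^3 + 1 has j = 0
```

For representatives over F_{p^2} one would do the same arithmetic in the quadratic extension, but the formula is identical.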
Okay, so that tells you what the vertices of this graph are, and now let me tell you what the edges are. Given an elliptic curve, one way you can think of it is as an abelian group. I haven't told you a lot of the nice theorems about elliptic curves that many of you probably know, but if you look at the group of points on an elliptic curve over a finite field like F_p or F_p squared, it's actually an abelian group. And the nice thing about an abelian group is that you can quotient it by subgroups and you get another group. There's actually a nice geometric theory behind this that lets us extend from just thinking of these as abelian groups to thinking of them as geometric objects carrying a group structure. I'm blanking on the reference, but I think it's geometric invariant theory, by Mumford; maybe somebody can remind me or correct me if I got that wrong. But for your purposes, and for the purposes of cryptographers, you can forget about a lot of that theory and just think of these objects as abelian groups. So how are we going to start from one elliptic curve and get to another one? We're going to quotient by some subgroup, and then we're going to get some other elliptic curve, which is again an abelian group. And the size of the subgroup that you quotient by gives you the degree of the map that takes you from the elliptic curve you started with to the elliptic curve you end up with. In this setting, it's very easy to make sure everything works, because we're going to let p be very, very large, of cryptographic size, and then we're going to quotient only by subgroups of very, very small size, for example, subgroups of size two. The main instantiation of supersingular isogeny graphs is when the degree is either two or three. So that's pretty easy.
So what that means is you start at a starting point in your graph, which is an elliptic curve, and you want to quotient by subgroups of size two. Well, those are really easy to find: they're generated by the two-torsion points, which are just the roots of the polynomial f of x if the curve is y squared equals f of x. I'll show you the formulas for computing the isogenies of degree two in just a minute. But now that we have said what the vertices and the edges of this graph are, we get some very, very nice properties. First of all, if you take degree-L isogenies, where, like I said, L is going to be two or three in this setting, then these graphs are actually k-regular, where k is just equal to L plus one. If you think of L equals two, this is very nice, because each elliptic curve has exactly three two-torsion points defined over the algebraic closure. They're not necessarily defined over the base field; that's kind of an important issue. But you have three two-torsion points, and those give you the three edges that take you to a new elliptic curve. And you can actually end up with a loop: the image can be the same elliptic curve you started with, but we still count that as an edge. So we have a 3-regular graph if we take L equals two. We also have some very nice theorems: we can make this into an undirected graph if we assume that p is congruent to one mod 12. These graphs were actually first proposed as Ramanujan graphs by Pizer, described in terms of quaternion algebras and under the assumption that p is congruent to one mod 12. But in cryptography, we end up using the graphs even when p is not congruent to one mod 12. I'm leaving aside a little bit of a technical issue here, which is that because L is small, L is two or three, these isogenies are going to be separable, because their degree is coprime to the characteristic, which is very, very large, as I mentioned.
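To make the point concrete that the two-torsion points are just the roots of f, here's a small brute-force sketch over F_p (fine for tiny p; a real implementation would factor f instead, and the helper name is my own):

```python
def two_torsion_points(a, b, p):
    """2-torsion points (r, 0) of y^2 = x^3 + a*x + b over F_p, p an odd prime.

    They are exactly the affine points with y = 0, i.e. the roots of
    f(x) = x^3 + a*x + b.  Brute force over F_p, so this only finds the
    two-torsion defined over the base field, not the algebraic closure.
    """
    return [(r, 0) for r in range(p) if (r**3 + a * r + b) % p == 0]

# y^2 = x^3 - x over F_7: f(x) = x(x-1)(x+1), so all three roots are rational.
print(two_torsion_points(-1 % 7, 0, 7))  # [(0, 0), (1, 0), (6, 0)]
```

When f has fewer than three roots over F_p, the missing two-torsion points live in an extension field, which is exactly the base-field issue mentioned above.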
So the formulas for computing isogenies, and the equation of the new curve, were actually given by Vélu in the 1970s. As you can see here, these are very, very easy to compute. If you have a two-torsion point Q, represented as (r, 0), where r is a root of f of x and the elliptic curve is y squared equals f of x, then you have a nice equation for your new curve E2, just in terms of the coefficients A and B of the original curve and r, and you have a nice representation for the map between E1 and E2. So now, in the interest of time, I'm going to end by showing you a picture of these supersingular isogeny graphs. Maybe I should have done this before I defined them, but I think this is something very nice to remember from this lecture: a picture of what these supersingular isogeny graphs look like. Like I said, in roughly 2005 we proposed supersingular isogeny graphs for the NIST hash function competition, and I spoke about this proposal at a lot of different number theory and crypto conferences, including ECC and things like that. I spoke about it at the JMM, the Joint Mathematics Meetings, in 2008, and Dana Mackenzie, who's a science writer, was at my talk; the hash function competition was finishing up around that time. And so he wrote an article in Science magazine about hash functions, and in particular about our proposal of supersingular isogeny graphs for hash functions. So we actually created this picture for him. We took a prime which was about 2500, so a very small prime from a cryptographic point of view; I think it was p equals 2521. And one thing that I maybe didn't include, I'm not sure if I put it on a previous slide, no: we have what we call the Eichler class number, which tells us how many isomorphism classes of supersingular elliptic curves we have, and it's roughly p over 12.
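Here is a small sketch of those degree-2 formulas, in my own transcription of Vélu's formulas for a kernel point Q = (r, 0) on E1: y² = x³ + Ax + B: with t = 3r² + A and w = r·t, the image curve is y² = x³ + (A − 5t)x + (B − 7w), and the map sends (x, y) to (x + t/(x − r), y(1 − t/(x − r)²)):

```python
def velu_2_isogeny(a, b, r, p):
    """Degree-2 isogeny from E1: y^2 = x^3 + a*x + b with kernel {O, (r, 0)}.

    Vélu: t = 3r^2 + a, w = r*t; image curve E2: y^2 = x^3 + (a-5t)x + (b-7w).
    Returns (a2, b2) and the rational map phi (defined away from x = r).
    """
    t = (3 * r * r + a) % p
    w = (r * t) % p
    a2, b2 = (a - 5 * t) % p, (b - 7 * w) % p

    def phi(x, y):
        inv = pow((x - r) % p, -1, p)          # needs x != r
        return ((x + t * inv) % p,             # X = x + t/(x - r)
                y * (1 - t * inv * inv) % p)   # Y = y * (1 - t/(x - r)^2)

    return a2, b2, phi

# E1: y^2 = x^3 + x over F_103 (103 = 3 mod 4, so E1 is supersingular);
# its only F_p-rational two-torsion point is (0, 0).
p, a, b, r = 103, 1, 0, 0
a2, b2, phi = velu_2_isogeny(a, b, r, p)        # image curve: y^2 = x^3 - 4x

# Find some affine point on E1 by brute force and check its image lies on E2.
x = next(x for x in range(1, p) if pow(x**3 + a*x + b, (p - 1) // 2, p) == 1)
y = next(y for y in range(p) if (y * y - (x**3 + a*x + b)) % p == 0)
X, Y = phi(x, y)
print((Y * Y - (X**3 + a2 * X + b2)) % p)       # 0: the image point is on E2
```

The on-curve check at the end is exactly the kind of sanity test one runs when implementing these formulas: the degree-2 formulas are simple enough that a single mapped point catching an error is quite likely.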
So again, that same section of Joe Silverman's book on elliptic curves gives you the exact formula: the greatest integer part of p over 12, plus either zero, one, or two, depending on p mod 12. And so if I have a prime of roughly 2500, not 2500 bits, just 2500, namely 2521, then the number of nodes in the graph is going to be roughly p over 12, so somewhere around 200 nodes. So there are about 200 nodes in this graph, all smushed together, and it looks really kind of ugly; it's hard to even figure out how to represent such a graph. And the point is that if you look at this graph, which like I said is very small from a cryptographic point of view, and you take two nodes in some random parts of the graph, it's very hard to see how to get from one point in the graph to the other. And that's the whole point of the hardness of this underlying problem. Just to illustrate that, we took the two endpoints of the blue path that you see, and we showed the path that you find between them. And you can see there's no sense of orientation, no sense that if you go in one particular direction, take one particular step, you'll be coming closer to the endpoint that you want. We also show this red path, which is a cycle in the graph, which, as I mentioned, is important for the collision resistance of the hash function. And cycles, again, are very hard to find. So the point of this picture is to try to give you an idea that these are very strange graphs, without any natural kind of orientation or routing. This afternoon I'm going to talk a little bit more about the actual expansion property of these graphs, which is important for getting a uniformly distributed output for your hash function, and I'm also going to talk about the key exchange that you can define from these graphs. So I've just left about five minutes in case anyone wants to ask questions.
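That count can be sketched directly. The standard formula is floor(p/12) plus 0, 1, 1, or 2 according as p is 1, 5, 7, or 11 mod 12; as a sanity check I also brute-force the count for a few tiny primes, where (as it happens) all the supersingular j-invariants lie in F_p, so counting trace-zero curves over F_p finds them all. The function names are mine:

```python
def legendre(u, p):
    """Legendre symbol (u/p) for an odd prime p."""
    u %= p
    if u == 0:
        return 0
    return 1 if pow(u, (p - 1) // 2, p) == 1 else -1

def num_supersingular(p):
    """Number of supersingular j-invariants in characteristic p (p > 3)."""
    return p // 12 + {1: 0, 5: 1, 7: 1, 11: 2}[p % 12]

def supersingular_j_over_fp(p):
    """Brute force the supersingular j-invariants lying in F_p (tiny p only).

    For p > 3, E: y^2 = x^3 + a*x + b over F_p is supersingular iff its trace
    is zero, i.e. sum over x of legendre(x^3 + a*x + b) equals 0.
    """
    js = set()
    for a in range(p):
        for b in range(p):
            disc = (4 * a**3 + 27 * b**2) % p
            if disc == 0:
                continue  # singular, skip
            if sum(legendre(x**3 + a * x + b, p) for x in range(p)) == 0:
                js.add(1728 * 4 * a**3 * pow(disc, -1, p) % p)
    return js

print(num_supersingular(2521))            # 210 nodes, i.e. "roughly p/12"
print(len(supersingular_j_over_fp(11)))   # 2: j = 0 and j = 1728 = 1 mod 11
```

For p = 2521 the formula gives exactly 210 nodes, which is the "around 200" in the picture.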
So I'd be happy to take questions now, but otherwise you can also save your questions for the problem session. Thank you very much. I see there are questions.