I'm Jake Massimo, and we're talking about some joint work with Martin Albrecht, Kenny Paterson and Juraj Somorovsky called Prime and Prejudice: Primality Testing Under Adversarial Conditions. I'm going to start with a couple of demos. They're going to be quite quick because I don't have much time.

First we're going to look at GNU GMP, specifically its primality testing function, which takes two arguments: the number that we want to test, and reps, the number of rounds of testing we're going to perform. We can think of this reps value as essentially how many Miller-Rabin iterations we're going to perform. In the documentation we're told that reasonable values of reps are between 15 and 50. So here we have a little script to do a quick demo. I wanted to do this live, but I couldn't, so we're going to have to use some screenshots and our imagination. Here is a 1024-bit number that we give to GMP, and we test its primality with reps from 1 up to and including 15, the minimum number of rounds advised. Each time, GMP declares that this number is prime, even at the 15th round of Miller-Rabin. That's great.

So let's take a closer look at this number. Well, in fact, this number isn't actually prime. It's composite: a product of two larger primes. More explicitly, it's of the form n = (2x + 1)(4x + 1), where x = km + 189, m is a product of small primes, and k is a buffer to get the number to the correct bit size. What's interesting about this number is that the result is actually deterministic within GMP, so this number will always be declared prime by GMP when using this many reps, that is, less than or equal to 15, the minimum recommended. Cool. So let's look at another example, OpenSSL.
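To make the reps parameter concrete, here is a minimal Python sketch of a random-base Miller-Rabin test. This is my own illustration, not GMP's actual implementation, which differs in details such as how it chooses its bases (the fact that GMP's verdict on the demo number is deterministic tells you its base selection is not truly random):

```python
import random

def miller_rabin(n, reps=15):
    """Probabilistic primality test; one 'rep' = one round with a random base."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(reps):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime; error probability at most 4^(-reps)
```

Each round with a genuinely random base catches a composite with probability at least 3/4, so reps rounds push the error below 4^(-reps) for randomly chosen candidates; the point of the talk is that adversarially chosen composites can defeat specific base-selection strategies.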
So here we're going to look at the equivalent primality testing function in OpenSSL, which again takes the number we want to test and the number of rounds of primality testing we want to do. Now, OpenSSL is nice in that it actually gives us a function which, by default, takes the bit size of the number we want to test and gives us the number of rounds of testing needed to achieve an error rate of less than 2^-80. So this is what happens by default; of course, you can pick the number of rounds yourself if you want to.

So another quick demo. This time we have a 2048-bit number, this big one at the bottom, and we're going to run it a couple of times to see how it measures up. Firstly, OpenSSL correctly declares it composite, nice one. I'm going to put the same number in again, and it's composite. So the error rate was 2^-80, right? So how long have I got? Let's try it one more time to see what happens. Okay, nice. This time it actually declared it prime. So this is pretty sweet. In fact, we got a little bit lucky here, but OpenSSL will incorrectly classify this composite number as prime one time in 16.

This work comes from some wider work that we've been doing on the implementation landscape of primality testing in cryptographic software, and some mathematical libraries as well. For each library, we've been looking at what sort of primality tests are being offered, how they implement these tests, and how these tests handle adversarial input rather than the random sort of input that they're set up to handle. So a real quick overview of what we did: we documented the failure rate that these probabilistic tests claim to have, compiled these into a table, and also compiled the highest rate of failure that we could achieve ourselves. So we can see, highlighted, OpenSSL at one in 16.
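To see where a figure like one in 16 can come from: a composite of the form n = (2x + 1)(4x + 1) passes a single random-base Miller-Rabin round with probability 1/4, the worst case the Monier-Rabin bound allows, so if only two rounds are run (consistent with the one-in-16 figure), the composite survives with probability (1/4)^2 = 1/16. The following sketch, my own illustration rather than the paper's code, exhaustively counts the "strong liar" bases for the smallest composite of this form, 91 = 7 * 13 (x = 3):

```python
from math import gcd

def is_strong_liar(a, n):
    """True if base a fails to witness the compositeness of odd n."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

# Smallest composite of the form (2x+1)(4x+1): x = 3 gives 7 * 13 = 91.
n = 7 * 13
units = [a for a in range(1, n) if gcd(a, n) == 1]
liars = [a for a in units if is_strong_liar(a, n)]
print(len(liars), len(units), len(liars) / len(units))  # prints: 18 72 0.25
```

Exactly a quarter of the invertible bases lie, so each random-base round is fooled with probability 1/4; a 2048-bit number built the same way exploits a small default round count in exactly this way.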
We also have some 100% failure rates for GMP and some other smaller libraries like JSBN, Cryptlib, LibTomCrypt, LibTomMath and WolfSSL. Thank you for listening. If you want any more information, the paper is on ePrint, or just come and talk to me. Thank you.