for their talk about high-assurance crypto software. Thank you. So why is high-assurance crypto software a thing? Why do we worry about the correctness and quality of crypto software? Well, here are some recent news items about crypto getting broken so badly that the private keys leak — these are reports from just October and November. And this one sums it up nicely: "timing is everything." These were timing attacks which completely broke elliptic-curve cryptography, so badly that the private keys came out. Timing attacks are not a new thing. Back in the day, when you logged into a server and it checked your password, it might do a character-by-character comparison. So you start by finding out what the first character is: you send AAA, then BBB, then CCC. Of course, none of these will actually work — it would be very surprising if one of them were the right password — but you observe how long it takes the server to say the password is wrong. And then you notice that CCC takes a little longer to fail. So next you try CAA, CBB, CCC, and each still fails in about the same time, until you hit the one where it takes a little longer to fail the check — and now you have the second character too. In the end you recover the whole password. This exploit goes back to 1974: the login check really was doing this character-by-character comparison, and timing gave you a way to log into accounts that weren't yours. Of course, when you come to the world of cryptography, the timing dependence is usually more subtle than this. So think back to your crypto 101 lectures.
Remember the square-and-multiply algorithm. You look at the bits of the exponent — say you're computing m^d mod n in RSA, and d has L bits. You initialize an accumulator with the message, and then for each bit, going from the top bit all the way down to bit zero, you square; and whenever the bit is set, you additionally do one multiplication. At the end you output the result. Now, there are some problems with this. If you're an attacker, you learn something about the length of d: this L was defined as the actual bit length of d, so if d happens to be much shorter than n, the whole loop is shorter. And there's also a branch in here: multiply if the bit is one, do nothing if it's not. With fine-grained access, somebody can even see, for each step, whether you went straight on to the next squaring or did a multiplication first — in the worst case reading off the whole bit pattern just by watching what happens each cycle. If you're a remote attacker, you only see the overall time, but that still leaks. And the same structure appears not just in RSA exponentiation but also in elliptic-curve scalar multiplication — this is basically the biggest part of the TPM fail: the scalar multiplication there took time proportional to the bit length of the scalar. If your secret has some leading zero bits, it's much faster. And if your d is very sparse — it could be just a one, a whole run of zeros, and then another one — that would also be about as fast as possible.
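The square-and-multiply loop just described can be sketched in a few lines. This is my own minimal Python model, not code from the talk; note how both the number of iterations and the per-bit branch depend on the secret exponent:

```python
def square_and_multiply(m, d, n):
    """Compute m^d mod n, scanning d from the top bit down.

    Leaks: the loop runs d.bit_length() - 1 times (length leak),
    and the extra multiplication happens only for set bits
    (Hamming-weight / branch leak).
    """
    acc = m % n                      # initialize with the message
    for i in range(d.bit_length() - 2, -1, -1):
        acc = (acc * acc) % n        # always: square
        if (d >> i) & 1:             # secret-dependent branch!
            acc = (acc * m) % n      # only for set bits: multiply
    return acc
```

Both the length of `d` and its Hamming weight show up directly in the running time, which is exactly the leak being discussed.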
A sparse exponent can be as fast as one that is a few bits shorter than the others. So from timing alone you don't know exactly whether it was short and therefore fast, or sparse — few multiplications — and therefore fast. Either way: there is a strong dependency on the length, and also a strong dependency on the density of the exponent. Now, in practice people don't usually do plain square-and-multiply bit by bit. Multiplications are precious, and you want to save on the number of them, so you grab several bits at once — windowed exponentiation. Say you grab two bits at a time: you precompute a few values — m, m squared, m cubed — and then process the exponent window by window. The first window you can skip if it's both zeros; otherwise you start the accumulator at the corresponding precomputed value. Moving along by two bits means that instead of squaring once per step, you square twice: the accumulator c goes to c squared and then to the fourth power each step. Everything is like the previous loop, except you handle two bits at a time: instead of saying "if the next bit is set, do a multiplication," you look at the value of the two-bit window and multiply by the matching precomputed power. With four-bit windows, you only need one multiplication per four bits. It doesn't change the number of squarings.
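The windowed variant just described can be sketched as follows — again my own illustrative Python, with the window width `w` as a parameter (the talk walks through w = 2 and w = 4):

```python
def windowed_pow(m, d, n, w=2):
    """Compute m^d mod n, processing w bits of d per step.

    Precompute m^0 .. m^(2^w - 1); each step does w squarings and
    at most one multiplication by a table entry.
    """
    table = [1]
    for _ in range((1 << w) - 1):
        table.append(table[-1] * m % n)
    # round d's bit length up to a multiple of w
    nbits = ((d.bit_length() + w - 1) // w) * w
    acc = 1
    for i in range(nbits - w, -1, -w):
        for _ in range(w):
            acc = acc * acc % n          # w squarings per window
        window = (d >> i) & ((1 << w) - 1)
        if window:                       # skipped only for all-zero windows
            acc = acc * table[window] % n
    return acc
```

The number of squarings is unchanged; only the multiplications are batched, one per nonzero window.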
And now we're doing four bits at a time, and this smooths out the effect of having a sparse exponent: as long as there's a single one anywhere in the window, we do a multiplication. With four bits at a time there are 15 cases out of 16 where you have to do a multiplication, and only one case out of 16 — the all-zero window — where you don't. But there's still the problem with the length. How much do those few leading bits matter? You might think a few top bits of the exponent can't possibly have much effect, that an attacker wouldn't even notice. Worse: if you're using the Chinese remainder theorem for RSA — computing with d mod p and d mod q — then, as an article showed a few years ago, you can collect partial information about d mod p and d mod q across many observations, combine it, and extract even more. And if you're doing DSA or ECDSA signatures, the situation is much worse still. There you generate a one-time random number, the nonce; you do one scalar multiplication with it, and then you do something else with this number to produce the signature. For these one-time nonces, if you know, say, the top few bits of enough of them, then that's it — lattice techniques give you the secret key. This is a very common system, and it's exactly what was exploited in the two papers — TPM-Fail, for example.
And in TPM-Fail they showed that, against typical TPM implementations — the cryptography in the TPM in your computer — you can get the keys out of the TPM. The other result — I haven't seen the full paper yet, but Jan Jancar, Petr Svenda and Vladimir Sedlacek have a very nice and informative web page — did the same for smart cards. So this is a really bad attack. Both the TPMs and the smart cards had been certified: devices like these are supposed to be tested for exactly this before they're used for signatures, and yet — such a huge effect. And there's more to it than overall timing; there are already lots of attacks out there beyond what these papers used. Beyond what TPM and smart cards leak on the side, if you have hyperthreading there are cache-timing attacks: if you do a table lookup depending on secret data, then depending on whether this entry or that entry is in cache, an attacker who can observe the cache can reconstruct the secret from these leaks. But this should be a constructive talk, so let's see how to fix this. One thing is: for all our crypto implementations, we actually know an upper bound. We know the size of our RSA modulus n, and we know a bound on how big d will be relative to n — so we have a good upper bound on the length of d. Why not just use that? We run the loop for that fixed length, regardless of the length of our key. Instead of initializing the accumulator with the message, we initialize it with one: then the leading zero bits just keep squaring one, which changes nothing. In effect we pad d to a fixed length and run a cycle with a constant number of iterations — the same thing as before, still with the if-else inside.
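The length-padding fix just described looks like this in my toy Python model — the loop always runs `L` iterations, where `L` is a public bound (e.g. the modulus size), and the accumulator starts at 1 so that leading zero bits cost the same as set bits' squarings:

```python
def fixed_length_pow(m, d, n, L):
    """Compute m^d mod n, always running exactly L iterations.

    L is a public upper bound on d's bit length.  Starting from
    acc = 1 means leading zero bits just square 1, so short
    exponents take exactly as long as full-length ones.  The
    secret-dependent if on each bit is still present here.
    """
    assert d < (1 << L)
    acc = 1                          # not the message: pad with zeros
    for i in range(L - 1, -1, -1):
        acc = acc * acc % n
        if (d >> i) & 1:             # branch still leaks which bits are set
            acc = acc * m % n
    return acc
```

This removes the length leak; the remaining branch is dealt with next.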
I mentioned cache-timing attacks, and in general the attacker may learn not just how many iterations there are, but which path each iteration takes. So what we do next is give up a bit more performance: we do the multiplication in every iteration, and then use arithmetic, not a branch, to decide which of the two values to take — the product h we just computed, or the old accumulator m. There's nothing left for the attacker to see by looking at the control flow. You can write the selection as bit times h plus (1 minus bit) times m: if the bit is 0, that's 0 times h plus 1 times m, so for a zero bit I keep m; and for bit 1, 1 times h plus 0 times m turns out to be just h. This small modification to the code comes at a cost, because now the cycle always runs at full length and we do a multiplication for every bit, not just the set bits. Now, I've been talking in terms of RSA exponentiation, but the same applies to elliptic curves: there you multiply a point by a scalar — in Diffie-Hellman you multiply the generator. So let's see how this works with elliptic curves. If you were here five years ago at 31C3, you saw the talk where I told you about the traditional Weierstrass-style curves, in which there's an additional point at infinity that needs special handling: we don't have any nice single formula that covers that exceptional point. Edwards curves and Montgomery curves avoid this — for Edwards curves everything is nice; you have this pleasant shape with the point at 12 o'clock, and one set of formulas just works. For example there's a popular curve, Curve25519, which is used by your browser and so on; with the Montgomery ladder you do one doubling and one addition per bit.
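The branchless selection just described can be added to the fixed-length loop like so. A caveat in my own sketch: Python big-integer arithmetic is itself not constant time, so this only illustrates the control-flow idea, not a real constant-time implementation:

```python
def branchless_pow(m, d, n, L):
    """m^d mod n: fixed length L, multiply every iteration,
    select the result arithmetically instead of branching."""
    acc = 1
    for i in range(L - 1, -1, -1):
        acc = acc * acc % n
        h = acc * m % n              # compute the product unconditionally
        bit = (d >> i) & 1
        # bit*h + (1-bit)*acc: h when bit is 1, acc when bit is 0,
        # with no secret-dependent branch anywhere in the loop body
        acc = (bit * h + (1 - bit) * acc) % n
    return acc
```

Every iteration now performs the same operations in the same order; the price is one multiplication per bit regardless of the bit's value.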
So: one addition and one doubling per bit. The reason Curve25519 uses Montgomery curves and the Montgomery ladder is that it's constant time and still pretty fast. There's an additional benefit of using complete, constant-time algorithms: you will never enter an infinite loop or hit an exceptional case because some input violated an assumption — we've had bugs like that, in Windows for example, where code could be sent back around a loop again and again. And sometimes people do get this right: OpenSSL, for example, has a few subroutines which are labelled as constant-time code. It's not pervasive by any means, but some routines really are written with timing in mind. Is that true, though? I mean, people make claims about crypto all the time. Whenever somebody ships bad crypto, it has been shipped with the claim that it's great crypto. RC4 was a cipher people really wanted to use — it was kept as a trade secret, but eventually it leaked, and then everybody had RC4, and it took forever to get rid of it. To be clear: don't use RC4, RC4 is broken. But this example on the slide actually uses RC4, so let me first look at what this code does, and then say what we're doing with it.
The program at the top uses OpenSSL's RC4. When you read it, you see it mallocs some space — 32 bytes, say — for a key, but never initializes it; and at the end of the program everything is freed. So maybe it's a work in progress, but okay: there's some space for a key, and that buffer is passed to the key expansion, what OpenSSL calls RC4_set_key, which fills in an RC4_KEY structure. This program is something you can compile, and the compiler won't complain — you just have to be a bit careful that optimizations don't throw the calls away, since the program computes nothing observable. Now run this program under Valgrind — or build with MemorySanitizer, same idea; as you saw in the previous presentation about fuzzing, with Valgrind you don't even need to recompile anything, you just run the binary under it. Valgrind interprets the program, follows the call into RC4_set_key and all these functions, and keeps track, for every byte, of whether it has been initialized: it knows the malloc'd key buffer starts out as uninitialized data, and it propagates that status through everything your pointers touch. And the one thing it checks is: is uninitialized data ever used in a branch condition, or used as an address for a memory access? If so, Valgrind complains. For our malloc'd, uninitialized key, Valgrind follows it inside RC4_set_key and then complains: "use of uninitialised value" — the key byte is being used for an array access. And that makes sense —
it makes sense that Valgrind would complain there: it's trying to figure out whether you're accessing some wrong spot in memory, and if the index is uninitialized, it can't know. But notice: a branch or an array index depending on uninitialized data is exactly the definition of non-constant-time code. So this is exactly what you need to check constant-time behaviour: mark the key as uninitialized, run under Valgrind, and ask — is anything derived from the key used for a branch, or used as a memory address? If Valgrind complains, as it rightly does for RC4's key-dependent table accesses, the code leaks. And if Valgrind says there's no problem, that's a strong indication the code really is constant time — so it's a happy tool after all. So now we have constant-time exponentiation and scalar multiplication, plus a way to check it: in the end you implement your elliptic-curve arithmetic, or your arithmetic modulo the RSA modulus, and then you check with Valgrind, which will follow everything through. Valgrind: awesome tool. But there is a question underneath: the processor itself. On many processors a multiplication takes one fixed number of clock cycles, but a processor designer can say "oh, for these operands I can finish faster" — for division that kind of early exit is completely normal, and it's a perfectly reasonable optimization. What about other processors? Take the ARM Cortex-M3, on which people want to implement elliptic-curve cryptography: the long multiplication, whose output is twice the register size, takes from 3 to 7 cycles depending on the operand values. The documentation even comes with a little block diagram that tells you how many cycles it will take, depending on whether there are special operands, whether they fit in 16 bits, and so on.
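The Valgrind trick just described can be modelled in a few lines of Python: wrap secret values in a type that permits arithmetic but raises the moment the value is used in a branch or as an index. This is my own toy illustration of the principle, not how Valgrind works internally:

```python
class Secret:
    """Toy taint tracker: arithmetic on secrets is fine, but
    branching on them or indexing with them raises, mimicking
    Valgrind's complaint about uses of uninitialised values."""
    def __init__(self, v):
        self.v = v
    def __xor__(self, other):
        o = other.v if isinstance(other, Secret) else other
        return Secret(self.v ^ o)
    def __or__(self, other):
        o = other.v if isinstance(other, Secret) else other
        return Secret(self.v | o)
    def __bool__(self):                  # used in an if-condition
        raise RuntimeError("branch depends on secret data")
    def __index__(self):                 # used as an array index
        raise RuntimeError("memory address depends on secret data")

def leaky_compare(x, y):
    for a, b in zip(x, y):
        if a ^ b:                # branches on secret-derived data
            return False
    return True

def ct_compare(x, y):
    diff = Secret(0)
    for a, b in zip(x, y):
        diff = diff | (a ^ b)    # arithmetic only, no branch
    return diff                  # nonzero .v means "not equal"
```

Feeding `Secret`-wrapped bytes through `leaky_compare` raises immediately at the first branch, while `ct_compare` runs to completion — the same verdicts Valgrind gives for the two styles of code.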
And it's the same situation with other architectures and other processors. We had decided to do a constructive presentation, and then we talked it over and realized how much is broken: it's not just that the tools don't do quite what we want them to do, and not just all these vulnerabilities to timing attacks — there are more CVEs where something other than timing is going on. Let me explain this one: CRYPTO_memcmp. A leaky comparison is very bad when it's the tag check in authenticated encryption, so OpenSSL made a constant-time function, CRYPTO_memcmp, a constant-time comparison especially for this. And in the assembly implementation for the PA-RISC architecture, this function had a problem: it was effectively reduced to comparing just one bit in every byte. Does someone in the room use PA-RISC? — I see at least ten hands. It's not the most popular processor in the world, but it exists, you can write assembly for it, you can even still find such machines; it's not crazy that OpenSSL has assembly code for it. But look at the impact: it allows forging messages. Say you have a message, and at the end of the message there's a 16-byte tag, and this function checks the tag coming in against the computed one. We think we're comparing 128 bits, so a forgery slips through with probability 2 to the minus 128; with the bug, only 16 bits are effectively compared, so a forgery succeeds with probability 2 to the minus 16. We lose 112 bits of security, and the function is slower than a plain memcmp on top of it: slower and much less secure. Classic. Now let's focus on Intel — there, you'd think, everybody is paying attention. At some point Intel made new instructions, and using AVX2 there's an implementation of
Montgomery multiplication for 1024-bit operands in OpenSSL. The code is from July 2013, and in 2017 a bug was discovered in it — a carry bug. What's the consequence of this bug? The advisory says attacks against DH1024 are "considered just feasible", because most of the work needed to deduce information about a private key can be performed offline. And how much work is that — a year on a cluster, or can it be done on a home computer? It's not specified in the advisory; they don't answer the question. The same subroutine is also used for RSA, and there the advisory says attacks against RSA and DSA as a result of this bug "would be very difficult to perform and are not believed likely". Notice what that means: they don't say there's no attack; they say they don't believe anyone can do it. And think about what such a bug is: a multiplier that occasionally computes the wrong answer gives you, in effect, a different crypto system — a slightly different RSA-1024. I don't know, maybe it's secure — but has anybody really looked at it? Normally, if we have a new crypto system, it goes through a lot of review, a long review, before we trust it. Here the reasoning is just: people were using RSA-1024 with this code, if it were breakable there should have been some problems reported, nobody broke it, so everything must be OK. And a few weeks ago there was another bug, another advisory, in the same area of code. We don't want to laugh at anyone in particular here — the point is that we just don't know the consequences of these bugs. Here's a piece of the patch for this bug. The code before already has a comment saying "correct". Is this correct? The patch adds some lines, also annotated "correct". Well, that's clear enough, then. Meanwhile, you've got hundreds of cryptographers around the world fighting against the quantum threat.
You've got a lot of cryptographers working on post-quantum cryptography; it will protect you from the threat of quantum computers, and everything will be OK. One of those schemes is Falcon, a post-quantum signature scheme. In September there was an announcement from the Falcon team about a bug with exactly this flavour: the servers still produce correct software behaviour, the signatures verify — but all of the implementations would leak the private keys, all with the same bug. And the tests didn't catch it: the implementations were developed in parallel, they all did the same thing, they had the same test vectors and used the same algorithm. Falcon-with-this-bug is, in effect, a different scheme, presumably with much lower security. The author also commented that the traditional development methodology — being super careful — has failed; it broke down pretty badly here. What can we take from all of this? Some conclusions. If you think about elliptic curves: special points that need special handling make your software more complex, so prefer curves without exceptional cases. If it's a new system that hasn't been used and studied for long — the newer post-quantum schemes, as opposed to something old like RSA — expect that the implementation pitfalls are less well understood. You want to remove leaks by turning branches into arithmetic: instead of "if the bit is 1, do this; if the bit is 0, do nothing", compute both and select arithmetically — but note that the arithmetic version is more complicated, and complexity breeds bugs: people were trying to make the comparison not leak the values of the data, and look what happened. We have fewer studies on how to implement these newer systems securely; before Falcon, implementers simply weren't familiar with these problems. Another problem is this drive for speed that we have.
Very small pieces of code, executed very often: we really want to optimize them to be as fast as possible, squeeze everything out — and optimized code is more compact, cleverer, and therefore more error-prone. You may have different processors, with or without AVX2, and for each processor you write an implementation for, you have to take its quirks into account, not only in the reference code. Take Keccak: the Keccak team doesn't just keep reference code in the library — they have made more than 20 implementations for different architectures. Or AES: if you have a cheap smartphone, it most likely has a core — a Cortex-A7, say — without AES hardware support, and that's where you get problems: the hardware-free, table-based implementations leak through the cache, and a careful constant-time software AES would drain your battery. Along these lines there was the plan to use Speck, the cipher specified by the NSA, on low-end devices, because it satisfied the speed requirements. After some public outcry — Jason Donenfeld among others did a lot of work here — something better was adopted instead: a recently designed construction built around ChaCha. So it can be done, but it took real effort; it's probably among the more complicated implementations of all this complicated math. Could proving things correct help? That seems to help. Here's a line from a book from 1910 — Principia Mathematica: "From this proposition it will follow, when arithmetical addition has been defined, that 1 + 1 = 2." That's on page 379 of the book. That's the level of detail formal proof demands: every little step spelled out — by the time you get to one plus two equals three, it only gets worse. All these details: it looks like code for math. And people have worked on exactly this over the last several years.
And there are fans of this approach who recommend that you go through this pain: you shouldn't just convince yourself and some friends by reading 300 pages and agreeing that everything looks correct — you should write everything down so precisely that a computer can run automatic checks. You carefully define your specification and then really prove, machine-checked, that the implementation meets it; if it checks, it's correct. Just to give some context: many mathematicians don't like these tools, because they're such a pain to use. Nevertheless, they have enough fans that some amazing things have been built. One of them is EverCrypt: a crypto library in which the implementations come with machine-checked proofs. Here's the list of what it supports — some symmetric ciphers, some hashes, signatures, key exchange and so on. It's probably enough to run HTTPS — yes, you need a processor with AES support for some of it, but still. And there are proofs: people took proof tools, built tooling for this kind of analysis, and actually carried it through. That's a serious guarantee. The positive side is that for any of the verified code you can really say: yes, it computes the right output for every input, exactly as specified. So the question is reduced to whether the specification is correct — and, as a separate question, whether you trust the compilers and the processors underneath. The only problem is that it's very difficult to do. It's just pain.
And the more architectures, the more work: every additional implementation is that much more proof effort. For an example of how difficult it is, you can look at the list of what they verified for Intel chips — fast implementations of some of the functions. But if you're not on a big Intel laptop — on an older phone, say — then no, EverCrypt doesn't give you fast verified implementations there. It's not that it has big bugs; it's that doing these proofs again for each platform takes many, many hours, so it's been done for the new platforms first. Of course, besides proving, you also test all these things. Let me take a poll: how many people here have ever had the feeling "it's a pity — I could have tested much more"? I see pretty much everybody in the audience. And now: how many people have ever had the feeling "I wrote too many tests"? I see — okay, a few. Who is in both camps? Yeah. Of course, testing is fantastic. You should test everything, and you should try to choose tests that really cover your code. But take CRYPTO_memcmp with the PA-RISC bug — the one comparing only one bit per byte, effectively asking "do these particular bits match?". That function gives the wrong result for about one out of 2^16 random input pairs. So even plain random inputs will find it eventually, and the whole fuzzing philosophy says: choose your inputs smartly and you'll find it very quickly — a fuzzer mutating single bits catches this CRYPTO_memcmp bug almost immediately. Tests of roughly this kind were actually already present in SUPERCOP, the benchmarking framework — well, and in the test suite of the OpenSSL group.
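The testing point can be made concrete with a small experiment. Below is my own simplified model of the broken comparison (only the low bit of each byte compared — a stand-in for the actual PA-RISC assembly bug, not a reproduction of it): dumb random testing needs on the order of 2^16 trials to hit a wrong answer, while a fuzzing-style single-bit mutation catches it on the first try:

```python
import random

def buggy_cmp(x, y):
    """Model of the broken comparison: only the low bit of
    each byte is effectively compared."""
    return all(((a ^ b) & 1) == 0 for a, b in zip(x, y))

random.seed(1)  # deterministic demo

# Dumb random testing: a wrong "equal" verdict for unequal
# 16-byte inputs happens with probability about 2^-16.
found_after = None
for trial in range(1, 1_000_000):
    x = bytes(random.randrange(256) for _ in range(16))
    y = bytes(random.randrange(256) for _ in range(16))
    if buggy_cmp(x, y) and x != y:    # wrongly reported "equal"
        found_after = trial
        break

# Fuzzing-style testing: flip one high bit and recheck.
x = bytes(16)
y = bytes([0x80]) + x[1:]             # differs in a single top bit
instant_catch = buggy_cmp(x, y) and x != y
```

The smart mutation finds the bug in one step, which is exactly why structured fuzzing beats uniform random inputs for this class of bug.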
That framework didn't catch the bug, but it almost could have: with a slightly better organized testing effort we would have found it. So if you find yourself thinking "how could I have caught this?" — make an automatic test for it, add it to your regression test suite, and make sure it never happens again, to you or to anybody else. But the real problem with testing is this: you're not testing the millions of inputs, and the security goals, that you never thought of testing for. Some bugs are essentially out of reach of fuzzing: there are special inputs that an attacker can construct — "I can do this, I can feed it that" — hitting very particular cases. How do you deal with this? How do you fight it? Here's a concrete example: in November 2019 a bug was reported in Curve448 arithmetic that triggers at random with probability about one in 2^264. Setting up tests for that is hopeless: no amount of random testing will ever hit it. The question is: could an attacker find the inputs which make this operation fail inside the target? If the attacker can steer the computation into the failure cases, the bug is exploitable no matter how improbable it is on random inputs. So — should there be more analysis of how damaging this bug is, or should we just get rid of the bug, and that's it? Saying "the probability is so small, no one can find this" is not an argument. The conclusion people draw is: you have to prove correctness. Proving correctness of your algorithm is not so simple, and the approach is much more involved than testing — but it finds real bugs that testing never would. Here's an example: on the left side, let's look at how an audit works, as opposed to a test.
On the left side here there's a crypto memcmp-style function — you can see it, well, maybe you can't because it's too small — and as you read it, at some point you see there are some computations that only apply for a certain length of the operands. For each length, you work out what the code does: we XOR x0 with y0, x1 with y1, and so on, byte by byte, and we OR everything together; if all the bytes are equal, the result is 0, and if there's a difference in any of these bytes, it's nonzero; then we convert to a uint64, move through some more logic — and you can convince yourself that everything is correct. You can look at the assembly, draw the graph that shows how the input flows through the computation, analyze this graph, and say: yes, yes, yes, this code works correctly. And you do this for each such graph. There are tools that make this much easier — let me tell you about angr. angr starts from the binary and does a lot of that extra work for you, automatically: it takes your binary, lifts each instruction, so you learn what it did to your input — an XOR, say — and builds the graph of the computation from this. This matters because you don't have to deal with all the complexity of the assembly: in the lifted form there are only simple instructions and no jumps. Where there is a branch — if this bit is set, do one thing, otherwise the other — the tool can't pick one, so it splits the execution into the two possibilities and explores both paths. And this is just what we need: if there are loops — cycles whose bounds are based on public information — it unrolls them, and it gives you the unrolled straight-line code very quickly. Sometimes you can even check the correctness of this code along the way. There are only 9 minutes left, so I'll go quickly: here's a simple call to CRYPTO_memcmp, where you
You declare your inputs x and y, and you use the result z, so that the compiler doesn't throw this code out; maybe it needs z, maybe it's going to do something with it, so the call to CryptoMemcmp is really made. Then here's how angr works: we take this binary and pass it through angr, but instead of having concrete x and y, we replace those places in memory with something else, a symbolic variable, and we say: we don't know what this is. Then you run the program, and you look at what's happening in the last lines. After angr has run through the code, you can just ask it: do you think it's possible for this value to ever come out this way? There are automated tools, SMT solvers, which can sometimes answer this question; they might be able to answer it, but they may also need a lot of time. And this approach works: it lets you check that you have constant-time code. There's another tool which will do this for you, which I haven't used personally, called Manticore; I've worked with angr, angr works just fine, and it even has a cool GUI called angr management. So angr converts your code into this DAG form, and the interesting problem at that point is when the SMT solvers aren't smart enough to see that the code works. If that happens, you need to look at other tools: you take your reference code, which you've reviewed and you're sure works, and then you want to see that these implementations do the same thing, you want to match them against each other, and you need a certain amount of tooling for this. I'll give you a quick example of where this was done: sorting code. When you're sorting an array, you can do it three times faster than Intel's own optimized sorting library, and it was proven that the result comes out right: the code was passed through angr, and then you check it and say yes, yes, yes.
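In place of a real angr-plus-SMT-solver run, which needs a compiled binary and the angr toolchain, here is a toy analogue of the question you hand the solver, "can the final value ever disagree with plain equality?", answered by brute force over small inputs:

```python
from itertools import product

def ct_memcmp(x, y):
    # Branch-free comparison: accumulator is 0 iff all bytes equal.
    acc = 0
    for xb, yb in zip(x, y):
        acc |= xb ^ yb
    return acc

# Exhaustively check all 2-byte inputs over a small alphabet.
# An SMT solver answers the same question symbolically, covering
# all inputs at once instead of enumerating them.
for x0, x1, y0, y1 in product(range(8), repeat=4):
    x, y = bytes([x0, x1]), bytes([y0, y1])
    assert (ct_memcmp(x, y) == 0) == (x == y)
print("no counterexample: result agrees with equality")
```

Brute force only scales to tiny inputs, which is exactly why the talk reaches for symbolic execution and SMT solvers for the real thing.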
Nobody claims that most crypto code is verified, but you can just take random crypto code, pass it through angr, and check whether it's right, whether or not it was ever written with verification in mind: we put it through angr, which extracts the computation it's doing, and we compare with reference code written in a scripting language. The tool does that matching for you; it tells you whether the same crypto computation is being done. And it's actually fun. That's the great thing about it compared to all the other ways of auditing: analyzing the DAGs is, surprisingly, a fun way to work. At this point we'll be happy to take questions, so thank you for your attention. We'll do a very high speed Q&A: questions, not comments, just questions, please. First question, I think it's a very short one: is constant time really the only mitigation against timing attacks? It's the mandatory thing; you can do more, you can do randomization on top, but you want computations that are independent of the secret data, and constant time is how you get that. Second question: thank you for the talk; are there any approaches from real-time operating systems? Real-time systems let you say that some operation finishes within a certain amount of time, but the attacker can already get useful data before the end of the calculation, so that can help in a real-time context, but if you have constant time then it's much better. Next: thanks for this amazing talk, I just have a question, you said that Curve448 is slower than Curve25519? To my knowledge Curve25519 is one of the fastest on ARM; is it slower on Intel? On Intel it's faster than on ARM. Next, from the internet: what about formal verification of EverCrypt? It's actually constructed to be verified. Thanks for the question. You mentioned compiler bugs; do you think that a bug in angr could cancel out a bug in your code?
In the situation of testing, you have the original code and you have an independent framework for tests, and in the end, if everything matches, you can believe that this code is correct. Now, angr is a lot of Python code, and there are bugs in angr too, so there is some risk, but because the check is independent, you reduce the risk of mistakes in your crypto code. Next question, from the internet: there is progress, but what's the final check if it's random? That's enough of an answer, thank you. Microphone number one: what's the status of pairing-friendly curve arithmetic compared to other curves? It's a little more complicated; it's much more difficult. And if you look at TPM-FAIL, there are the same issues there: inside TPM-FAIL there was code that should have been tested, and obviously this code wasn't constant time; the same tricks will work. Last question: what about superscalar processors? Out-of-order execution doesn't break your algorithms or the analysis. You want to have some isolated data, data which is copied into a safe environment, and you want to make sure that nothing leaks out of that safe environment. If the processor's behavior is not based on the secret data, if which instructions get launched does not depend on the data, then it's fine; if it's different, then an attacker can get at the data inside this safe environment. Thank you very much for the wonderful talk.