for the introduction. I'm not feeling very well today, so I'm sorry if you can't hear me very well. I'm Thai from Google Security, and I'm here to present Project Wycheproof. I have 15 minutes, so I will talk very quickly so that we have a few minutes for questions and answers. At Google, we use crypto in many, many products, and I think this image captures very well how we do product crypto at Google. Most of the time, a product team wants to implement some security or privacy feature, and they come to us and ask us to develop that feature for them. So we look at the feature and develop a protocol for them, whether they want to do end-to-end encryption, storage encryption, or just to sign some URLs. We design and implement this protocol for them, and the protocol is usually built on top of robust internal APIs that we develop in-house. These APIs implement the common crypto operations, like authenticated encryption, digital signatures, or hybrid encryption. And these APIs, in turn, are based on third-party open-source crypto libraries like OpenSSL, OpenJDK, or Bouncy Castle. I name these libraries just because we use them, not because they are better or worse than other libraries. The problem is that we find these libraries have a lot of bugs, the bugs have been around for a long time, and they repeat themselves very frequently. There are bugs that should have been fixed 10 or 15 years ago, but somehow they are not, and when we looked at the libraries, we were very surprised. Another problem is that we find it very hard to write good crypto implementation guidelines, even for smart software engineers. They don't have the necessary background to implement crypto correctly, and getting crypto right requires digesting decades' worth of academic papers.
And of course, we security engineers can review the libraries manually ourselves, but that is not scalable, because there are lots of libraries out there. Another major problem is that sometimes we file a bug, fix it in our internal copy, report it upstream, and get it fixed upstream, only to see it come back when we upgrade our internal copy to a new version of the external library. Regressions like that really, really hurt, because we can't just go back and review every single change to the libraries. So there should be a better approach. With those observations, we recognized that software engineers prevent and fix bugs by using unit testing, and we think that many crypto issues can be resolved by the same means. These observations prompted us to develop Project Wycheproof, which is basically a set of open-source unit tests that check crypto libraries for known problems or for expected behavior. So far, we have more than 80 unit tests, and when we ran the tests against the libraries that we use, we filed more than 40 bugs. We have tests for elliptic curve crypto such as Diffie-Hellman, authenticated encryption, and big integer arithmetic. Most of the tests are defense in depth: rather than just checking for exploitability, we want to make sure that the libraries behave in the right way. For example, we just want to make sure that default values are reasonable. Today, the tests consider it not OK for a library to generate a key with a default size of 1024 bits, or to use SHA-1 as the default for a digital signature. And there are many, many more tests, where we check that the libraries are not vulnerable to, for example, timing side channels. We also release out-of-the-box test runners for the libraries that we use, like Bouncy Castle, Spongy Castle, and OpenJDK.
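As a concrete illustration of that defense-in-depth style of test, here is a minimal sketch in Python. The function name and threshold are illustrative stand-ins; the real Wycheproof tests are written in Java and call the actual library APIs under test.

```python
# Wycheproof-style defense-in-depth check, sketched in Python.
# check_default_rsa_key_size is a hypothetical helper: a real test would
# ask the library under test to generate a key WITHOUT specifying a size,
# then inspect the size it chose by default.

def check_default_rsa_key_size(bits: int) -> bool:
    """Return True if a library's default key size passes the check."""
    return bits >= 2048

# A library defaulting to 1024-bit keys fails the check; 2048 passes.
assert not check_default_rsa_key_size(1024)
assert check_default_rsa_key_size(2048)
```

The same pattern covers the other defaults the talk mentions, such as flagging SHA-1 as a default signature hash.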
Our goal is to make it very easy for the user to run one command line and test the latest version, or some particular version, of the libraries that we support. Any questions so far? In the next two slides, I will show you some of the cool bugs that we found. The first one is a key recovery in OpenJDK's DSA implementation. I think this bug was fixed in April last year. If you look at the red line, the signature object is used with a 2048-bit private key. The problem is that if you initialize the signature object like this, OpenJDK will always generate the nonce using only 160 bits. That means the nonce is heavily biased, and you can easily recover the private key using lattice techniques, just by observing three to five signatures. This is a critical problem, because DSA is still used a lot in many places. Any questions about this bug? The second bug is also a key recovery, in Bouncy Castle's elliptic curve Diffie-Hellman, ECDHC, where the last C stands for cofactor: when they compute a shared secret, they multiply by the cofactor. If you look at the red line, the way they compute the shared secret is they take the private key, which is key.getD, the private key of the receiver, and reduce it modulo the order of the public key, which is under the attacker's control. The X.509 standard allows the attacker to change the order of the public key. So we can change the order of the public key and binary search for the private key, because we can tell whether the private key is larger than the order. By repeatedly changing the order, we finally nail down the range of the private key. And it took just a few requests, maybe 100 requests, to recover the private key. So these are the two coolest bugs, I think, that we found. But there are many, many other bugs; you can look at our project on GitHub.
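To make the second bug concrete, here is a toy Python simulation of the binary search. The real attack runs over elliptic curve points; the oracle below only models the single bit that leaks per malicious key exchange, namely whether reducing the private key modulo the attacker-chosen order changed it. The names and the 32-bit toy order are illustrative.

```python
import secrets

# Toy simulation of the Bouncy Castle ECDHC key recovery.  The library
# reduced the receiver's private key d modulo the order n taken from the
# attacker-supplied public key.  The attacker can detect whether that
# reduction changed d (the shared secret stops matching), i.e. whether
# d >= n.  Binary search on n then pins down d exactly.

REAL_ORDER = 2**32                 # toy group order; real curves use ~256 bits
d = secrets.randbelow(REAL_ORDER)  # victim's private key
queries = 0

def reduction_changed_key(fake_order: int) -> bool:
    """Oracle for one malicious key exchange: True iff d >= fake_order."""
    global queries
    queries += 1
    return d % fake_order != d

# Find the smallest order n with d < n; the private key is then n - 1.
lo, hi = 1, REAL_ORDER
while lo < hi:
    mid = (lo + hi) // 2
    if reduction_changed_key(mid):   # d >= mid: search higher
        lo = mid + 1
    else:                            # d < mid: search lower
        hi = mid

recovered = lo - 1
assert recovered == d
assert queries <= 64  # a few dozen requests suffice at this toy order size
```

At a real 256-bit curve order the same search needs on the order of a few hundred requests, matching the figure in the talk.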
And I think last week, Bouncy Castle just released their latest version, and they fixed like eight or nine bugs that we found and reported to them. Working on Wycheproof allows us to understand what a good crypto library should look like, and one of the things we've observed is that there are not a lot of good crypto libraries out there. For example, there are crypto libraries that ask users for crypto input: if you want to encrypt something, you have to pass in a random number generator. It's impossible to test those libraries, because even if we test the library, the user can easily screw up by passing in an insecure random number generator. We also find that most of the libraries out there don't actually allow users to switch algorithms easily. If you switch the algorithm, you have to start from scratch, and the ciphertext encrypted with the old algorithm won't be compatible with the new software. Another property that we want to see in crypto libraries is that it should be easy to look at the code and understand right away the crypto properties guaranteed by the library. For example, if you are using authenticated encryption, it should be very easy to look at the code and understand the guaranteed properties right away, without having to navigate the whole structure of the library, down through the abstraction layers. Another thing we would like to see is the crypto community developing common crypto interfaces for C++, Python, Go, or JavaScript. We have something for Java, which is the Java Cryptography Architecture, but we don't have anything for these languages, and it's very hard for us to write tests without common crypto interfaces. So right now, what we do is try to translate most of our unit tests into raw test vectors, so that people can easily port them to other languages or platforms.
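The test-vector idea might look roughly like this. The JSON schema here is invented for illustration and is not Wycheproof's actual format; the point is that each vector carries inputs plus an expected accept/reject verdict, so any language can replay them against its local library.

```python
import json

# Illustrative (hypothetical) test-vector file: each vector records inputs
# and whether a correct implementation must accept or reject them.
VECTORS = json.loads("""
{
  "algorithm": "AES-GCM",
  "tests": [
    {"tcId": 1, "comment": "truncated tag must be rejected",
     "key": "...", "iv": "...", "ct": "...", "tag": "...",
     "result": "invalid"},
    {"tcId": 2, "comment": "normal round trip",
     "key": "...", "iv": "...", "ct": "...", "tag": "...",
     "result": "valid"}
  ]
}
""")

def run_vectors(decrypt_accepts) -> list:
    """Replay vectors against a library; return the tcIds that failed.

    decrypt_accepts(vector) -> bool stands in for calling the real library.
    """
    failures = []
    for t in VECTORS["tests"]:
        if decrypt_accepts(t) != (t["result"] == "valid"):
            failures.append(t["tcId"])
    return failures

# A "library" that accepts everything fails the invalid-tag vector (tcId 1).
assert run_vectors(lambda t: True) == [1]
```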
But it would be nice to write the tests once and run them everywhere. Here are a few useful links: the source code and documentation have been released on GitHub. We also have a mailing list where we take support requests. If you want to test something, or if you want to run the tests against some particular library that we don't support, please email us, and we will take a look and see if we can support you. The mailing list is also the place where we discuss the updates and the major tests that we are going to release in the next few months. We are actively working on adding more tests, and not only for primitives; we also want to test protocols like SSL or JSON Web Encryption. We have looked at those libraries and found a lot of problems, and those libraries are getting very, very popular these days, so we want to make sure that we can test them and help the developers fix the issues. For most of the tests that we release, we have been working with Bouncy Castle and OpenJDK to integrate the tests into their CI systems, so that they can avoid regressions. I think that's it for my presentation. These are the people working on the project; all of us are from the Google security team, but we welcome external contributions. So far, we have gotten at least a few pull requests from external people, so please keep them coming, and send us anything that you want to improve in the project. That's it, thank you. I think we still have five minutes for questions and answers, if you have any comments or anything that you want to test. And I want to have a commercial break: my team is hiring. If you're interested in building and breaking crypto in products used by billions of people, please come and talk to me or send your resume to my email address. Thank you. OK, so we do have some time for questions, and I see somebody at the mic, so take it away.
Hello, interesting talk. I'm Yaoqi from the National University of Singapore. One quick question: suppose an open-source library like OpenSSL fixes a bug with some patches. How long does Google take to update the software on the users' client side? You mean, how long until we update the internal copy? No, not the internal copy: to deliver the software to all the client-side users. I'm not sure I'm the right person to answer this question; I think Adam and Emilia are probably the right people. But basically, we try to push the updates to our users before we release the information on the vulnerabilities or the bugs. So by the time you see a vulnerability released in an advisory, usually the bug has been fixed on the users' machines already. I see. So there is a very short window for the attacker to attack your software? I'm not sure that's the case, because it takes a very short amount of time to fix a vulnerability, but during the time it takes for the people who found the vulnerability to report it to us, somebody else may have gotten their hands on knowledge of the vulnerability. But it should be very quick, and we do our best to make sure that the window is very small, maybe a few days. Thank you. Thanks very much for doing this work, because it looks really interesting. Thank you. I had a question about your bullet point on switching algorithms by applications. I'm wondering if you can expand on that a little bit and talk about what exactly the problem is that you're trying to solve and what that means. Yeah, it's just an observation. When we look at the crypto libraries out there, we found that most libraries don't allow users to switch algorithms easily.
For example, if you have a bunch of data, you encrypt it, and you send it around. Say you encrypt some URL parameters, and you send those parameters around the world, and they are embedded in links that are public: you can't change the ciphertext anymore. And now you find out that the algorithm you are using may not be very strong, and you want to switch to something else. The problem is, when you want to do that, most of the time you can't decrypt the old ciphertext anymore. And that's a big problem, right? Because you want to be able to still decrypt your current ciphertext without having to completely rewrite your application. Most of the time, I think it boils down to making sure that the ciphertext contains some metadata about the key that was used to generate it. There are some very simple techniques you can use, like embedding a key ID into the ciphertext. But some standards, instead of embedding just a key ID, embed a key ID as well as the algorithm parameters that were used to generate the ciphertext. For example, if you look at the JSON Web Encryption standard, they have the key ID, the algorithm that was used, and then some other parameters. And I think that is wrong. You shouldn't do that: you should just specify the key ID, and the key ID should be bound to all of those properties. So when you look at the key ID, you have all the information in your configuration to decrypt or verify the ciphertext. Right, so basically what you're saying there is that, because most crypto designers, protocols, and APIs are looking to prevent downgrade attacks, for example, they actually limit options deliberately.
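The key-ID scheme just described can be sketched as follows. This is a toy: the "cipher" is an XOR keystream purely to keep the example self-contained (a real keyset would hold AEAD keys), and all names are illustrative. The point is that the ciphertext carries only a key ID, while the keyset configuration binds that ID to the algorithm and key, so old ciphertexts stay decryptable after rotating to a new algorithm.

```python
import secrets

# Toy keyset: key ID -> (algorithm label, key).  The algorithm is bound to
# the ID in configuration, never negotiated from ciphertext metadata.
KEYSET = {
    b"\x00\x01": ("old-alg", bytes(range(1, 33))),
    b"\x00\x02": ("new-alg", secrets.token_bytes(32)),
}
CURRENT_ID = b"\x00\x02"   # new data is encrypted under the newest key

def _xor(data: bytes, key: bytes) -> bytes:
    stream = (key * (len(data) // len(key) + 1))[:len(data)]  # NOT secure
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt(plaintext: bytes, key_id: bytes = CURRENT_ID) -> bytes:
    _alg, key = KEYSET[key_id]
    return key_id + _xor(plaintext, key)     # ciphertext = key ID || body

def decrypt(blob: bytes) -> bytes:
    key_id, body = blob[:2], blob[2:]
    _alg, key = KEYSET[key_id]               # the ID alone selects alg + key
    return _xor(body, key)

# Ciphertexts made under the old key remain decryptable after rotation.
old_ct = encrypt(b"legacy data", b"\x00\x01")
assert decrypt(old_ct) == b"legacy data"
assert decrypt(encrypt(b"new data")) == b"new data"
```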
So it sounds to me like you're talking about techniques that both avoid those kinds of attacks and provide this ability to switch algorithms. OK, thank you. OK, is that a quick question? Yes. OK. On slide two, I noticed something very interesting: the APIs you provide to developers implement very basic things like authenticated encryption, and yet you use libraries written for these purposes, and apparently the APIs those libraries provide can't be exposed for this purpose. So what's the benefit of using those libraries? Why should there be a plurality of libraries that all don't work, instead of one that does work and provides the right algorithm, so you don't make the mistake of using the wrong algorithm, with an API that's easy to use? If you look at crypto libraries, there are, I think, several levels. Low-level libraries like OpenSSL, OpenJDK, and Bouncy Castle provide the primitives: AES, elliptic curve Diffie-Hellman, all of those primitives. But if we let programmers use those libraries directly, they usually use them wrong. So what we want to do is give them a kind of mid-level library that abstracts away the common crypto operations that we see deployed in many, many use cases. In those applications, it's usually just four operations: authenticated encryption, MAC, hybrid encryption, and digital signatures. There are other things, like deterministic encryption, but they are less popular than those four primitives. So what we do is develop interfaces for those primitives and provide our developers those interfaces. Of course, we don't let them use those interfaces directly either. In order to use them, they have to talk to us, because most of the time we want to make sure that they use the right primitive for the right application.
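A minimal sketch of what such a mid-level interface could look like, in Python. The real internal APIs are not public, so the names, the toy XOR keystream, and the HMAC construction here are purely illustrative; the design point is that product code sees only encrypt/decrypt and never chooses modes, paddings, or hashes.

```python
import hashlib
import hmac
import secrets
from abc import ABC, abstractmethod

class Aead(ABC):
    """Mid-level authenticated-encryption interface: callers see only
    encrypt/decrypt, never primitive choices like modes or padding."""
    @abstractmethod
    def encrypt(self, plaintext: bytes, associated_data: bytes) -> bytes: ...
    @abstractmethod
    def decrypt(self, ciphertext: bytes, associated_data: bytes) -> bytes: ...

class DemoAead(Aead):
    """Toy implementation (XOR keystream + HMAC-SHA256 tag), for shape only."""
    def __init__(self, key: bytes):
        self._enc_key, self._mac_key = key[:16], key[16:]
    def _stream(self, n: int) -> bytes:
        return (self._enc_key * (n // 16 + 1))[:n]   # NOT a real cipher
    def encrypt(self, plaintext: bytes, associated_data: bytes) -> bytes:
        body = bytes(a ^ b for a, b in zip(plaintext, self._stream(len(plaintext))))
        tag = hmac.new(self._mac_key, associated_data + body, hashlib.sha256).digest()
        return body + tag
    def decrypt(self, ciphertext: bytes, associated_data: bytes) -> bytes:
        body, tag = ciphertext[:-32], ciphertext[-32:]
        want = hmac.new(self._mac_key, associated_data + body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, want):
            raise ValueError("invalid authentication tag")
        return bytes(a ^ b for a, b in zip(body, self._stream(len(body))))

aead: Aead = DemoAead(secrets.token_bytes(32))
ct = aead.encrypt(b"message", b"context")
assert aead.decrypt(ct, b"context") == b"message"
```

Swapping `DemoAead` for a different implementation would not touch any caller, which is the algorithm-agility property discussed earlier.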
So does that answer your question? Thank you. We're already running slightly over time, so I suggest we take any further questions to the break. Well, let's thank Thai one more time. Thank you, Thai. Thank you.