So, welcome to our presentation on the security analysis of CPace. It's joint work with Michel Abdalla and Julia Hesse. My name is Björn Haase. This talk is about password-authenticated key exchange (PAKE), that is, about establishing a high-entropy session key by use of a low-entropy password. There are many protocols in the literature already, and there is a common challenge for efficient designs: the efficient designs required idealized assumptions, such as ideal ciphers or random oracles; we have recently seen implementation attacks; and, at least in the past, protocol design and actual application was hampered by patents. An example where clearly understanding the assumptions is important is the encrypted key exchange (EKE) protocol of Bellovin and Merritt from 1992, where the proof required an ideal cipher for encrypting group elements. And actually, this assumption was at the core of the difficulties in implementing this protocol securely. An example of the implementation attacks is the recent exploit by Vanhoef and Ronen on a PAKE protocol which has been incorporated in WPA3 and EAP-pwd, where a non-constant-time mapping algorithm, hunting-and-pecking, was exploited. And regarding protocol design, there have been quite a number of protocols whose main feature was that they circumvented the patents that existed at the time; this made them more computationally complex, but it also hindered security analysis. So, as a result, we designed CPace at the time mainly as a protocol targeting constrained devices, with little memory consumption and low computational complexity, and we also aimed at considering implementation pitfalls. As a side aspect, we also aimed at making it annoying for adversaries that have access to a quantum computer. For two years now, CPace has been recommended for use in IETF protocols as a result of the CFRG PAKE selection process.
So that you have an idea of which target platform was considered for CPace: initially, we designed it for a wireless application, an industrial explosion-protected sensor, where explosion protection is realized by limiting the power available to the sensor for operation. So we have a power budget of about 1.5 milliwatts for wireless operation and security, and we have really tight memory constraints for the security implementation. We therefore really aimed at having a PAKE protocol which is as tiny as possible. So how does CPace work? CPace is derived from an older protocol, SPEKE, designed by Jablon in 1996, and SPEKE itself is based on Diffie-Hellman key exchange. So let's revisit Diffie-Hellman key exchange in the CPace notation. Diffie-Hellman key exchange starts with both sides agreeing on a common generator of a group. Both sides sample secret scalars as private ephemeral keys. Then we calculate the public keys from the private keys, exchange the public keys, verify the inputs, and derive the shared secrets. This is Diffie-Hellman, and CPace is exactly like Diffie-Hellman, with the only difference that we calculate the generator from the password and the party identifiers. So we have this additional sub-step in CPace. Calculating the generator is thus the central step in CPace, and the question is which method we use for calculating it. It would be ideal if we had a random oracle that hashes a string and directly outputs a group element, a hash function HG. But unfortunately, it's unclear how to construct such a random oracle, so this is not available in practice. What we use in real-world designs instead starts from the fact that CPace is meant to be used on elliptic-curve groups: a public key consists of the coordinates of a point on the curve, and each coordinate of the point is encoded as a field element.
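As an illustration, the Diffie-Hellman steps above, plus CPace's additional sub-step of deriving the generator from the password and the party identifiers, can be sketched in Python. This is only a hedged sketch: the group is a toy prime-order subgroup of the integers mod p with illustrative parameters (real CPace runs on elliptic-curve groups such as Curve25519), and squaring mod p stands in for a proper hash-to-group step.

```python
import hashlib
import secrets

# Toy group for illustration only: the order-q subgroup of Z_p^*.
p = 2039                  # safe prime, p = 2*q + 1
q = 1019                  # prime order of the subgroup

def keygen(g):
    """Sample a private ephemeral scalar and derive the public key g^x."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def cpace_generator(password: bytes, id_a: bytes, id_b: bytes) -> int:
    """CPace's extra sub-step: derive the generator from the password and
    the party identifiers.  Squaring mod p pushes the (nonzero) hash
    output into the subgroup; it stands in for hash-to-curve."""
    h = hashlib.sha256(b"|".join([password, id_a, id_b])).digest()
    u = 1 + int.from_bytes(h, "big") % (p - 1)   # nonzero field element
    return pow(u, 2, p)

# Both sides derive the same generator from the shared password ...
g = cpace_generator(b"correct horse", b"initiator", b"responder")
# ... and then run plain Diffie-Hellman on it.
xa, Ya = keygen(g)
xb, Yb = keygen(g)
Ka = pow(Yb, xa, p)       # A's shared secret
Kb = pow(Ya, xb, p)       # B's shared secret
assert Ka == Kb
```

In the full protocol the shared secret is additionally fed through a final key-derivation hash together with the transcript; that step is omitted in this sketch.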
And now we use the fact that there exist mapping algorithms which take a field element and output a group element. So in order to calculate the generator, we have a first step where we hash the inputs, the party identifiers concatenated together with the password, and obtain a field element as output. And then we map the field element to a group element using the mapping primitive. So what is specific for CPace in the real world? We would like to use curves of non-prime order for efficiency. We would like to use single-coordinate, field-element-based protocols. We would like to drop the checks against invalid-curve attacks, and drop the checks for group membership, by relying on twist security, of course. We would also like to allow for non-uniform sampling of scalars. And we would like to choose different mapping primitives depending on the curve group that we are working on: for instance, Elligator 2 for Montgomery or Edwards curves, and Icart's map or SWU for short Weierstrass curves. And finally, we would like to use both map-once and map-twice constructions. And this work, our contribution, is all about the question: are these variants actually secure? There are previous results by Abdalla and Barbosa which have analyzed CPace. They required a random oracle directly hashing to the group in the analysis. They required a modification of the protocol, with the password included in the final hash. They mandated prime-order groups. And their proof did not consider the use of single-coordinate public keys. So essentially, the most important weak point in the previous analysis has been the idealization of the hashing to the curve group. And finally, the required properties of the mapping algorithm remained unclear. In the CFRG selection process, for instance, this has been used as an argument for promoting PAKE constructions that are far more complex than CPace.
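The two-step generator calculation described above, hash the inputs to a field element, then map it to a group element, together with the map-twice variant, might be sketched as follows. Again this is a toy subgroup of the integers mod p, with squaring as a hedged stand-in for the real curve maps (Elligator 2, Icart, SWU); all parameters are illustrative.

```python
import hashlib

p = 2039                  # toy safe prime, p = 2*q + 1 with q = 1019

def hash_to_field(msg: bytes, ctr: int) -> int:
    """Step 1: hash the protocol inputs (password concatenated with the
    party identifiers) to a nonzero field element."""
    h = hashlib.sha256(bytes([ctr]) + msg).digest()
    return 1 + int.from_bytes(h, "big") % (p - 1)

def map_to_group(u: int) -> int:
    """Step 2: map the field element to a group element.  Squaring mod p
    stands in for curve maps such as Elligator 2 (Montgomery/Edwards
    curves) or Icart/SWU (short Weierstrass curves)."""
    return pow(u, 2, p)

def generator_map_once(pw: bytes, id_a: bytes, id_b: bytes) -> int:
    """Map-once construction: one field element, one map evaluation."""
    return map_to_group(hash_to_field(b"|".join([pw, id_a, id_b]), 0))

def generator_map_twice(pw: bytes, id_a: bytes, id_b: bytes) -> int:
    """Map-twice construction: two field elements, two map evaluations,
    results combined (a group product here; point addition on a curve)."""
    msg = b"|".join([pw, id_a, id_b])
    u1, u2 = hash_to_field(msg, 1), hash_to_field(msg, 2)
    return (map_to_group(u1) * map_to_group(u2)) % p
```

Both constructions are deterministic in the password and the identifiers, so the two honest parties always agree on the generator.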
So what are our results? The first result is that we were able to show that we don't need to assume a random oracle for hashing to the group, and we were able to derive the exact requirements for the mapping primitive that maps the field element to the group element. We coined this property probabilistic invertibility. Essentially, given a group element, we need to be able to calculate all of its preimages in the field F_q, and secondly, we need to be able to give a maximum bound on the number of preimages over all points on the group. With these properties, we can construct an inverse-mapping algorithm and use it in the simulator in the UC framework. And luckily, all of these properties are fulfilled by all of the currently discussed mapping algorithms, for instance all of the mapping algorithms considered in the hash-to-curve draft at the CFRG. Finally, the proof works for both map-once and map-twice constructions. The second result is that we were able to overcome a typical issue for Diffie-Hellman-type protocols: when looking at adaptive security, you typically face the commitment problem. You first need to simulate the public keys of honest parties without knowing the secrets, and after you have learned the secrets upon corruption, you need to provide a consistent picture. Typically, that's impossible. In CPace, we were able to do it by using the probabilistic-invertibility property of the map: it allowed us to generate a trapdoor which gave the simulator access to the secret exponents of the generators, and this is what made it possible to provide a consistent picture after the corruption. Finally, in our analysis, we were able to show that various other implementation aspects don't impact security. For instance, we were able to show that groups of non-prime order have no impact on security.
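Under the same toy stand-in used above (squaring mod p in place of a curve map), the two probabilistic-invertibility requirements, enumerating all preimages of a group element and having a global bound B on their number, and the resulting simulator-side inverse sampler might look like this. All names and parameters are illustrative, not the paper's definitions.

```python
import secrets

p = 2039                  # toy safe prime, p ≡ 3 (mod 4)
B = 2                     # global bound: a square mod p has at most 2 roots

def map_to_group(u: int) -> int:
    """Stand-in mapping primitive (squaring instead of a curve map)."""
    return pow(u, 2, p)

def preimages(y: int) -> list[int]:
    """Requirement 1: compute ALL preimages of a group element y
    (possibly none, if y is not in the image of the map)."""
    r = pow(y, (p + 1) // 4, p)       # root candidate, valid since p ≡ 3 mod 4
    if pow(r, 2, p) != y:
        return []                     # y has no preimage under the map
    return sorted({r % p, (p - r) % p})

def invert_map(y: int):
    """Simulator-side inverse sampling: pick one of B slots uniformly and
    return None for an unoccupied slot, so that the accepted samples are
    distributed like honest field elements pushed through the map."""
    pre = preimages(y)
    slot = secrets.randbelow(B)       # Requirement 2: the bound B
    return pre[slot] if slot < len(pre) else None
```

Retrying `invert_map` on freshly drawn group elements until it returns a value yields a correctly distributed (field element, group element) pair, which is what lets the simulator program hash outputs consistently.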
We were able to show that CPace on groups is secure, and that CPace on groups modulo negation is also secure, so single-coordinate scalar multiplication, such as X25519, can be used securely. And finally, we formalized the twist-security notion for elliptic-curve groups, coining a twist computational Diffie-Hellman problem. As a result, we were able to show that point verification can be dropped when implementing CPace using single-coordinate scalar multiplication on twist-secure curves, for instance using X448 or X25519. As a side effect, we have established an alternative approach for carrying out security proofs. Conventionally, in simulation-based proofs, the simulation of the experiment and the reduction are separated. You first have a simulator algorithm, which you show to be indistinguishable between the ideal world and the real world; the simulator has a set of bad events where it cannot carry out the simulation, and there the algorithm aborts. In a second step, you start a reduction argument where you embed challenges of a hard problem into the protocol flow and show that the bad events coincide with events where you are able to solve the hard problem with the help of the adversary. So it's a two-step approach, with reduction and simulation separated. In our proof approach, we have merged simulation and reduction. We did this by embedding the assumptions as part of the simulator code, using assumption libraries. So let's first recall that a cryptographic assumption is fully specified by its corresponding experiment algorithm: the experiment generates a random challenge, provides all of the oracles that are made available to the adversary, and finally checks the adversary's output for a correct solution. And if you have a falsifiable assumption, you are always able to design an efficient experiment algorithm.
And then you can define and use the assumption in the form of the representation of its experiment algorithm, and that's what we did. So have a look, for instance, at Python-like code for an assumption, the strong computational Diffie-Hellman problem. The strong computational Diffie-Hellman problem is the standard computational Diffie-Hellman problem, where you have a challenge consisting of three group elements. The experiment library generates such a problem instance by sampling a generator and sampling two additional points, so that you have a computational Diffie-Hellman problem. The strong computational Diffie-Hellman problem differs from the normal computational Diffie-Hellman problem by giving the adversary access to a restricted DDH oracle, where two inputs are fixed. And by fixing two inputs, we are able to give an efficient implementation of this decisional Diffie-Hellman oracle. The simulation-based proof strategy now consists of the following. We write a simulator that embeds the assumption-library objects. We embed the challenges produced by the assumption-library objects in the simulated protocol execution. And we write the main simulator's code such that it never aborts itself; there is no bad event in the simulator code. The only permissible abort conditions are aborts in the assumption libraries, for instance here, where we have the bad event and the abort case in the assumption library when the experiment object is provided a solution of the computational Diffie-Hellman problem, and in this case the simulator aborts. And now we see that the bad events exactly coincide with the events where a correct solution for the challenge problem is provided. So this gives us the advantage that we can use the exact same simulator code body and only replace the assumption libraries when studying protocol variants.
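A hedged reconstruction of such a Python-like assumption library for the strong computational Diffie-Hellman problem, again on a toy subgroup of the integers mod p rather than an elliptic curve, and with illustrative names throughout, not the slide's actual code: the experiment object samples the challenge, exposes the restricted DDH oracle with two of its inputs fixed, and hosts the proof's only abort condition.

```python
import secrets

class StrongCDH:
    """Assumption library: experiment for the strong CDH problem in a toy
    group (the order-q subgroup of Z_p^*).  It samples the challenge,
    offers the restricted DDH oracle, and aborts the simulation when a
    correct solution is submitted -- the proof's only bad event."""

    def __init__(self):
        self.p, self.q = 2039, 1019            # toy safe prime, p = 2q + 1
        # Sample a random generator of the subgroup (4 has order q).
        self.g = pow(4, secrets.randbelow(self.q - 1) + 1, self.p)
        self._x = secrets.randbelow(self.q - 1) + 1
        self._y = secrets.randbelow(self.q - 1) + 1
        self.X = pow(self.g, self._x, self.p)  # g^x
        self.Y = pow(self.g, self._y, self.p)  # g^y

    def challenge(self):
        """The three group elements embedded into the simulated run."""
        return self.g, self.X, self.Y

    def ddh(self, S: int, T: int) -> bool:
        """Restricted DDH oracle: the first two inputs are fixed to
        (g, X).  It decides T == S^x, which is efficient to implement
        because the library itself knows x."""
        return T == pow(S, self._x, self.p)

    def check_solution(self, Z: int):
        """Bad event: abort iff Z solves CDH(g, X, Y), i.e. Z == g^(x*y)."""
        if Z == pow(self.g, self._x * self._y, self.p):
            raise RuntimeError("abort: strong CDH instance solved")
```

The simulator would hold such an object, hand out `challenge()` values in the simulated protocol flow, answer consistency queries via `ddh`, and route every candidate solution through `check_solution`, so its only aborts are exactly the events where the hard problem is solved.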
For instance, we can replace the conventional strong computational Diffie-Hellman assumption library with a library for the strong twist-secure computational Diffie-Hellman assumption. And as another effect, we have the property that the reduction strategy is clearly visible in the executable code of the simulator. So the reduction is not buried deep in some arguments in the text body, but is clearly visible in the simulator code. So let's come to the conclusion. As a result of our analysis, CPace is a fast, resource-optimized PAKE protocol. It enjoys composability and a strong adaptive adversary model. We have been able to show that the various variants and tweaks that were made for resource-constrained devices don't impair security. As a second effect, we formalized the reduction arguments by embedding the assumption libraries in the simulator code, and we think that this assumption-library technique will work whenever the assumptions are falsifiable. Here are the links to the current version of the internet draft, and here's the link to the full paper. Thank you very much.