Hi everyone. This talk was recorded for Crypto 2020. Here we revisit the question of constructing lattice-based blind signatures. This is joint work with Eike Kiltz, Julian Loss and Ngoc Khanh Nguyen.

A blind signature scheme is an interactive protocol between two parties, a signer and a user. The signer takes as input a secret key; the user takes a public key as well as a message M. The key pair (pk, sk) is output by some key generation function, and the message M is an arbitrary bit string. In this work we consider the special case where the interaction consists of three moves. At the end of the interaction, the user learns a signature sigma on a message of its own choice, and the signer is supposed to learn nothing. Further, a public verification algorithm V is defined which, on input a message-signature pair as well as a public key, outputs either accept or reject.

Blind signatures come with two security properties: one-more unforgeability and blindness. In the blindness experiment, the adversary takes the role of a malicious signer. Intuitively, the adversary should be blind to which message was used in a specific interaction, as well as to the produced signature. In the one-more unforgeability experiment, also called OMUF, the adversary takes the role of a malicious user. The intuitive goal of the adversary is to forge a new signature on its own. In contrast to plain digital signatures, the existential unforgeability against chosen-message attack experiment is not applicable here, since when simulating signatures for the adversary, the experiment does not learn the signed messages. Therefore, the experiment has to keep track of the number of interactions, and thus of the number of signatures the adversary may learn. In the end, the adversary has to produce one more forgery than it completed interactions. Blind signatures have various applications in privacy-related protocols such as e-voting, e-cash and anonymous credentials.
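To make the three-move structure concrete before we turn to the lattice setting, here is a toy sketch of a classical blind Schnorr interaction. This is not the scheme from this talk: the group parameters, the helper names, and the use of SHA-256 as the random oracle are all illustrative assumptions, and the parameters are far too small to be secure.

```python
import hashlib

# Toy three-move blind signature (classical blind Schnorr) over a tiny
# group, purely to illustrate the message flow; NOT secure parameters.
p, q, g = 23, 11, 2          # g has order q in Z_p^*

def H(R, m):                  # random oracle with output in Z_q
    data = f"{R}|{m}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen(sk):
    return pow(g, sk, p)      # pk = g^sk

# --- signer, move 1: commitment ---
def signer_commit(r):
    return pow(g, r, p)       # R = g^r

# --- user, move 2: blind the commitment, derive the challenge ---
def user_challenge(R, pk, m, alpha, beta):
    R_blind = (R * pow(g, alpha, p) * pow(pk, beta, p)) % p
    c_prime = H(R_blind, m)   # challenge bound to the blinded commitment
    return (c_prime + beta) % q, c_prime

# --- signer, move 3: response ---
def signer_respond(r, c, sk):
    return (r + c * sk) % q

# --- user: unblind the response; signature is (c_prime, s_prime) ---
def user_finalize(s, alpha):
    return (s + alpha) % q

def verify(pk, m, c_prime, s_prime):
    # recompute the blinded commitment: g^s' * pk^(-c')
    R_blind = (pow(g, s_prime, p) * pow(pk, -c_prime, p)) % p
    return H(R_blind, m) == c_prime
```

The signer only ever sees (R, c, s), while the signature is (c', s'), linked to the transcript through the blinding values alpha and beta that only the user knows. (The modular inverse via `pow` with a negative exponent needs Python 3.8+.)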
Since our scheme can be instantiated from lattice assumptions, it is post-quantum secure. One can subdivide existing schemes into two distinct sets: on the one hand, schemes from generic complexity assumptions, and on the other hand, efficient schemes with a concrete security treatment. The works listed here are representatives of a whole body of work.

In the concrete, non-post-quantum setting, we have the seminal work of Pointcheval and Stern, Journal of Cryptology 2000. They give the first proof for the Okamoto-Schnorr blind signature scheme, which is based on the discrete logarithm assumption; this assumption can be broken by a quantum computer. In the generic, non-post-quantum setting, we have Juels, Luby and Ostrovsky, Crypto '97. Their construction uses one-way trapdoor permutations, and it is not clear how to instantiate these from a post-quantum secure assumption. In the intersection of generic and post-quantum, we have Fischlin, Crypto 2006. He gives a construction for blind signatures which uses a public-key encryption scheme, a commitment scheme and a non-interactive zero-knowledge proof, all three of which are instantiable from lattice assumptions. In the intersection of concrete and post-quantum, we have Rückert, Asiacrypt 2010. Rückert's work is the first concrete lattice-based construction for blind signatures; his construction is based on the hash function by Lyubashevsky and Micciancio. In this intersection, lots of related work builds upon the proof ideas of Rückert. Unfortunately, as we show in our first contribution, this initial work is flawed, and thus all subsequent work suffers from the same flaw. This clears the field for our second contribution, a construction for lattice-based blind signatures.

Three-move blind signature schemes are susceptible to the so-called ROS attack against OMUF security, where ROS stands for Random inhomogeneities in an Overdetermined Solvable system of linear equations.
The ROS attack can be seen as a specific attack against the OMUF security of three-move blind signature schemes. It works roughly as follows. The attacker takes the role of the user in the OMUF experiment. The first thing the attacker does is to request L first messages, R1 to RL, thus opening L parallel sessions with the signer. Next, the attacker uses these L values R1 to RL to compute a solution consisting of two parts. The first part is a vector of L challenge values C1 to CL, which the attacker sends to the signer one by one. The attacker then receives L responses S1 to SL. From these L responses, the ROS attacker finally computes L plus 1 signatures.

The bigger the value L, the more information the attacker receives, and hence the easier the task of breaking the scheme using the ROS attack. If L is too small, the task of the ROS attacker becomes information-theoretically hard. If the blind signature scheme is the one by Okamoto-Schnorr, whose hardness is based on the discrete logarithm problem, then the ROS attack is actually feasible: in work which recently appeared on ePrint by Benhamouda, Lepoint, Orrù and Raykova, they show that if L gets large enough, the ROS attack finds a solution in polynomial time. In the lattice setting, the concrete hardness of the ROS problem for large L is not known.

In Pointcheval and Stern's original proof for the Okamoto-Schnorr blind signature scheme, the parameters are chosen such that the ROS attack is information-theoretically hard. In the lattice-based blind signature scheme by Rückert, some of the proof techniques introduced by Pointcheval and Stern were reused. However, the proof does not take the possibility of the ROS attack into account, and in particular does not set the parameters such that the ROS attack becomes information-theoretically hard. This, roughly, is the problem in the security proof of Rückert's lattice-based blind signature scheme.
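To make the underlying ROS problem concrete, the sketch below brute-forces a miniature instance: given a random oracle Hros mapping challenge-combination vectors to scalars, find challenges c and L+1 vectors rho satisfying Hros(rho) = <rho, c> mod p. The function Hros, the tiny parameters, and the exhaustive search are illustrative assumptions; the actual attacks exploit structure (Wagner's algorithm, or the polynomial-time attack for large L) rather than search.

```python
import hashlib
from itertools import product

# Miniature ROS instance, solved by brute force purely to illustrate
# the problem statement; p and l are far too small to be meaningful.
p, l = 13, 2

def Hros(rho):                          # random oracle Z_p^l -> Z_p
    data = ",".join(map(str, rho)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def solve_ros():
    vectors = list(product(range(p), repeat=l))
    for c in vectors:                   # try every challenge vector c
        sols = [rho for rho in vectors
                if Hros(rho) == sum(r * ci for r, ci in zip(rho, c)) % p]
        if len(sols) >= l + 1:          # l+1 satisfied equations found
            return c, sols[:l + 1]
    return None
```

For each candidate c, about p^l / p of the p^l vectors rho satisfy the equation by chance, so with these toy parameters a solution exists with overwhelming probability; it is the exponential cost of this search for properly chosen parameters that makes small L information-theoretically safe.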
The main technical ingredient in Pointcheval and Stern's proof showing that for small L the ROS attack is information-theoretically infeasible is the forking lemma. It gives a lower bound on the success probability of executing an algorithm twice on partially dependent uniform randomness with shared state. In 2000, Pointcheval and Stern introduced forking proofs for signatures and blind signatures. In 2006, Bellare and Neven gave a generalization; their version only estimates the probability of both runs being successful. In the context of blind signatures, this is on its own not sufficient: for three-move blind signatures, an additional argument on the distributions of the outputs is needed. In 2019, Hauck, Kiltz and Loss gave a further generalization which also allows one to argue about the distributions of the outputs.

Our first contribution is to point out flaws in the work of Rückert: the use of the Bellare-Neven forking lemma leads to a flaw in the OMUF proof. Specifically, an argument on the output of the forked algorithm is missing. Thus, the security statement becomes uncertain; yet, we do not give a concrete attack.

Our second contribution is a generalization of the modular framework presented in HKL19. They show how to construct blind signatures from a special type of hash function. We extend that framework to the case of correctness errors, whereas HKL19 considered the error-free setting. In the lattice setting, blind signature schemes come along with some kind of correctness error. We say the scheme has correctness error delta if, for all messages M, the probability that the verification algorithm outputs reject is bounded by delta, where the probability is taken over the random choices of an honest execution of the scheme. The framework of HKL19 defines the two following transformations. From a linear hash function, they construct a linear identification scheme.
From that, they construct a linear blind signature scheme. In this work, we start with a linear hash function with noticeable enclosedness error, a notion closely related to correctness error. From that, we construct a linear identification scheme with negligible correctness error. Finally, from that, we construct a linear blind signature scheme, also with negligible correctness error.

A linear hash function is defined by three sets: a set of scalars S, a domain D and a range R, where D and R are S-modules. Let F be a map from the domain to the range. We require F to be a module homomorphism, meaning that F preserves addition and scalar multiplication. Further, we require the kernel of F to be non-trivial, implying that F is compressing. Finally, F is required to be collision resistant. In the OMUF proof, we reduce the hardness of breaking OMUF security to the hardness of finding collisions in F.

For every variable defined in the scheme, we define a filter set, which is a subset of either the set of scalars S or the domain D. From the filter sets, we require the enclosedness property: we say a linear hash function is delta-enclosed if the addition and scalar multiplication of certain variables lie in a specific filter set with good probability. This relates to the correctness error of the blind signature scheme: the better this probability, the smaller the correctness error. Further, we require smoothness from the filter sets: we say a linear hash function is smooth if computations of certain variables are identically distributed to certain other variables drawn uniformly at random from some other set. This property is crucial for the blindness proof.

The first transformation of our framework turns linear hash functions into linear identification schemes with correctness error. The key generation algorithm samples a secret key uniformly at random from its respective filter set and sets the public key to be F of sk.
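Before continuing with the protocol, here is a toy sketch of such an F, assuming, purely for illustration, F(x) = A·x mod q for a random matrix A with fewer rows than columns, so the kernel is non-trivial and F is compressing. A real lattice instantiation would additionally carry norm bounds (for collision resistance) and the filter-set structure; the sketch only demonstrates the module-homomorphism property.

```python
import random

# Toy "linear hash function" F(x) = A*x mod q with domain Z_q^m and
# range Z_q^n; since n < m the kernel is non-trivial (F compresses).
q, n, m = 97, 2, 5
rng = random.Random(0)
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

def F(x):
    # matrix-vector product A*x, reduced mod q componentwise
    return tuple(sum(a * xi for a, xi in zip(row, x)) % q for row in A)

def add(x, y):                 # module addition in the domain
    return [(xi + yi) % q for xi, yi in zip(x, y)]

def smul(c, x):                # scalar multiplication in the domain
    return [(c * xi) % q for xi in x]
```

Homomorphism here is immediate, F(x + y) = F(x) + F(y) and F(c·x) = c·F(x), which is exactly what the security reductions of the framework exploit.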
The signer starts by sampling r uniformly at random from its respective filter set. Here, the index i denotes that the assignment is repeated for all values where i is defined. Next, the signer applies F to all values r in order to get a vector of values capital R, and sends this vector to the user. The user now samples the challenge uniformly at random from its respective filter set and sends it over to the signer. The signer then calculates S as C scalar-multiplied with sk, plus r. Here, the circular arrow denotes that this operation is repeated until S is in its respective filter set; without this check, the value S may leak information about the secret key. With high probability, this check will be true for at least one value r. Thus, by choosing an appropriate dimension for the vector r, we achieve negligible correctness error. The signer ends its execution by sending S over the channel. At the end, the user checks whether there exists some i such that F of S equals C scalar-multiplied with pk, plus F of r i. If this is true, the execution of the identification scheme is correct.

The simplified idea for constructing the signature scheme is to set the signature to be the transcript; the verification algorithm of the blind signature scheme then tests whether the transcript is the result of a valid interaction. However, this contradicts blindness, since it is trivial to deduce from the signature which interaction it was produced in. So, we need to blind the transcript. Since the first component of the signature is a high-dimensional vector, this would leave us with huge signatures. To mitigate this, we use hash trees. Hash trees, also called Merkle trees, were first used in the context of lattice-based blind signatures in the work BLAZE+. A hash tree is a binary tree data structure in which a list of values, here r' 1 to r' 4, is hashed together to get a single hash value, the root of the tree. The leaves of the binary tree are the hashed values.
To get the hash value of a parent, both child nodes are hashed together. This step is repeated until the root of the tree is reached. Anticipating the usage of hash trees in the final blind signature scheme, we continue as follows. The first part of the signature consists of values such that a single value r', and thus a leaf of the tree, can be recomputed. The second part of the signature consists of the minimal number of nodes needed to reconstruct the root. To generate this subset of the tree, an additional algorithm, BuildAuth, is introduced. For the sake of an example, let r' 3 be the value recoverable from the first part of the signature. Then the authentication path consists of h4 and h5.

Now, for the second part of our framework. Here, we turn linear identification schemes with correctness error into linear blind signature schemes with correctness error. The key generation algorithm stays the same. The signer side also stays the same, so we can safely ignore it for now. But the user side changes significantly. Instead of drawing a challenge uniformly at random, the user proceeds as follows. First, the user draws two vectors of blinding parameters, alpha and beta. Alpha will be used to blind the third part of the transcript, beta the second. The vector r' is calculated such that exactly one value r' can be reconstructed from the blinded second and third parts of the transcript. Then, the hash tree is constructed. Next, the first part of the signature, c', is calculated as the hash of the root of the tree and the message m. The value c' is now blinded to get the second part of the transcript. The circular arrow denotes that this calculation is repeated until a j is found such that c is in its respective filter set. This guarantees that no information about c' is leaked by the second component of the transcript.
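The hash-tree mechanics just described, hashing the leaves, hashing sibling pairs upward, and authenticating one leaf with its path of siblings, can be sketched as follows. The function names are hypothetical, with build_auth playing the role of the path-generating algorithm, and SHA-256 standing in for the tree hash.

```python
import hashlib

# Minimal Merkle (hash) tree: build the root over a power-of-two list
# of leaves, produce an authentication path for one leaf, and recompute
# the root from leaf plus path.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_root(leaves):
    level = [h(x) for x in leaves]          # leaves are hashed values
    while len(level) > 1:                   # hash sibling pairs upward
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def build_auth(leaves, index):
    path, level = [], [h(x) for x in leaves]
    while len(level) > 1:
        path.append(level[index ^ 1])       # sibling node at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def root_from_auth(leaf, index, path):
    node = h(leaf)
    for sibling in path:                    # left or right child?
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node
```

With four leaves r'1 to r'4 and the leaf r'3 authenticated, the path has exactly two sibling nodes, matching the h4, h5 example above, and the verifier recomputes the root from the single revealed leaf plus that path.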
After the signer returns s, the user performs the same test which already appears at the end of the identification scheme. The value of the variable i is needed to identify the index of the value r' which can be reconstructed from the signature. Next, the user calculates the second part of the signature, s', by blinding the third part of the transcript. The circular arrow denotes that the calculation is repeated until a value k is found such that s' is in its respective filter set. This check guarantees that no information about s' is leaked by the third part of the transcript. Finally, the user computes the last part of the signature, an authentication path for exactly that value r' which can be reconstructed from c' and s'. The simplified verification algorithm then checks whether c' equals the hash of the root and m.

We formulate the following security statements. If a linear hash function is collision resistant, then the first transformation yields a one-more man-in-the-middle secure identification scheme, where the latter security notion is an intermediate notion natural to linear identification schemes. Further, if a linear identification scheme fulfills this notion, then the second transformation yields an OMUF-secure blind signature scheme. Additionally, if a linear hash function is smooth, then the resulting blind signature scheme is blind.

For our framework, we provide a sample instantiation based upon the hash function of Lyubashevsky and Micciancio. The underlying assumption is the hardness of the RSIS problem. Here, the size of the prime is approximately 2 to the power of 1890. Our signatures are roughly 36 megabytes, and the signing speed is presumably quite slow due to the size of the prime. Further, the maximum number of signatures per issued key pair is polylogarithmic in the security parameter. This is due to the security degrading exponentially in the number of issued signatures.
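The unblinding step can be checked algebraically: ignoring the filter sets and rejection loops, with c = c' + beta and s' = s + alpha, the value F(s') - c'·pk that verification recomputes equals R + F(alpha) + beta·pk, the blinded commitment that went into the hash. A minimal sketch of this identity with a toy F(x) = A·x mod q (single session, scalar beta; all names and parameters are illustrative, not the actual instantiation):

```python
import random

# Unblinding identity in the linear setting (filter sets omitted):
#   c = c' + beta,  s = c*sk + r,  s' = s + alpha
# implies  F(s') - c'*pk == R + F(alpha) + beta*pk.
q, n, m = 97, 2, 5
rng = random.Random(2)
A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]

def F(x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) % q for row in A)

sk = [rng.randrange(q) for _ in range(m)]
pk = F(sk)

r = [rng.randrange(q) for _ in range(m)]
R = F(r)                                        # signer's commitment
alpha = [rng.randrange(q) for _ in range(m)]    # blinds the response
beta, c_prime = rng.randrange(q), rng.randrange(q)

c = (c_prime + beta) % q                        # blinded challenge
s = [(c * ski + ri) % q for ski, ri in zip(sk, r)]
s_prime = [(si + ai) % q for si, ai in zip(s, alpha)]

# both sides of the identity, computed componentwise mod q
lhs = tuple((Fi - c_prime * pki) % q for Fi, pki in zip(F(s_prime), pk))
rhs = tuple((Ri + Fai + beta * pki) % q
            for Ri, Fai, pki in zip(R, F(alpha), pk))
```

Since only the user knows alpha and beta, the signature components (c', s') are decoupled from the transcript components (c, s) that the signer observed, which is the core of the blindness argument.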
Only in this polylogarithmic range is the ROS problem information-theoretically hard. So, all in all, the practicability is quite low. To conclude, we give a modular framework for lattice-based blind signatures with negligible correctness error, together with sound blindness and OMUF proofs. However, the instantiation is quite impractical. So, as open problems we see a more efficient instantiation, and also cryptanalysis of the generalized ROS problem. If there is one thing I want you to take away from this talk, it is the following: take care with forking proofs for blind signatures. They turn out to be quite tricky, especially in the lattice setting. Thanks for your attention.