So, welcome to the session on post-quantum cryptography. We have three talks. The first talk has a long list of authors: Ming-Shing Chen, Andreas Hülsing, Joost Rijneveld, Simona Samardjiska, and Peter Schwabe. And Joost Rijneveld is going to give the talk. OK. So I'm going to talk about building MQ-based signatures from five-pass MQ-based identification schemes. Let's just start. Why do we want to do this? The problem is that we want to have signatures in a post-quantum setting. The current signature schemes are not going to cut it when quantum computers arrive, and we want to have a secure signature scheme. Basically, we want two things. We want security arguments for these schemes, so that we know we can trust them, and we want them to be acceptably fast and reasonably small in terms of signature and key sizes. You can debate what counts as acceptable here, but for now let's just assume we have some notion of it. There are a bunch of people who have already come up with solutions. There are lattice-based schemes, and there are some schemes based on MQ. They all have upsides and downsides. We're going to focus on MQ here. And for MQ, the current situation is that for the schemes that are out there, the security is not very clear. Many of them have been broken. There are some that still stand, and we're going to try to base a signature scheme on MQ that is a bit more conservative. But before we get to that, let's first go through the general construction and an overview of what we do in this work. First, we transform five-pass identification schemes into signature schemes in general. To do this, we extend the Fiat-Shamir transform, which is traditionally defined for canonical IDSs; I'll talk about that in a bit. Then we show that an earlier attempt to do exactly this did not suffice.
Then we look at a specific application of this: an MQ-based signature scheme that you can obtain using this transform. The signature scheme bases its hardness on the MQ problem, which I'll also introduce in a bit. And then we instantiate and implement it and show how fast and how big it is. There are some footnotes, things we don't do: the reduction is in the random oracle model and not in the QROM, and the proof is also non-tight. There is a reduction, but yeah. Let's get started with some preliminaries. A canonical identification scheme is the typical three-pass identification scheme. The prover commits to some random value using the secret key and sends this commitment over to the verifier, who comes up with a challenge. The prover computes a response based on the challenge he gets and sends it over, and the verifier checks whether this response matches what he would expect based on the commitment and the challenge. This is the traditional setting. What do we require of this identification scheme? We require it to be passively secure, and we define security in terms of soundness, which means the probability that an adversary can convince the verifier should be small, so only a real prover who knows the secret key can convince the verifier. We also want it to be honest-verifier zero-knowledge, so that basically anyone can simulate an entire transcript without knowing the secret: being able to compute one of these conversations without the secret is a convincing way to show that the conversation does not actually leak your secret. For soundness, there is some chance that the adversary can guess right. We capture this with the soundness error kappa.
This should be small enough, so in the end it should be negligible, but for one round of the IDS it would typically be something like one half or two thirds; we'll come back to that. So how do we turn this into signatures? You apply the Fiat-Shamir transform, but before doing that, we first need to get rid of the soundness error. As I just mentioned, this is typically a number on the order of one half or two thirds, and by composing many instances of the IDS in parallel, we can reduce it, because you're basically multiplying the soundness errors of the parallel compositions. So you compose until you get to a negligible error in your security parameter k. And then we transform this into signatures. The end result is non-interactive: before we had this two-party thing with a prover and a verifier, now we just want a prover that computes the signature. So the signer plays the prover, and he uses a hash function to compute the challenges, which he then responds to, and this conversation ends up being the signature. We want to generalize this, because this is the classical three-pass setting, but by scaling up to five-pass you can benefit from a lower soundness error. Three-pass schemes typically have a higher soundness error, so you need more parallel composition; generalizing to a different setting can give a more efficient scheme with fewer rounds, so a smaller transcript that is easier to compute. But the current Fiat-Shamir transform works on the canonical IDSs I just showed, so that's what we're aiming at now. There was an earlier attempt to do this: at Africacrypt 2012, a paper appeared that defined n-soundness in order to apply the Fiat-Shamir transform to any (2n+1)-pass identification scheme.
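The core Fiat-Shamir idea mentioned above, deriving the challenge from a hash of the commitment and the message instead of getting it from a verifier, can be sketched with a classical Schnorr-style example. This is my own illustration, not the talk's scheme, and Schnorr is of course not post-quantum; it only shows the shape of the transform. The toy group parameters are far too small for real use.

```python
import hashlib
import secrets

# Toy Schnorr group (illustration only; real use needs a large group).
p = 2039   # prime, p = 2*q + 1
q = 1019   # prime order of the subgroup
g = 4      # generator of the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1    # secret key
    return x, pow(g, x, p)              # (sk, pk = g^x mod p)

def sign(x, msg: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)                    # commitment (first message of the IDS)
    # Fiat-Shamir: the challenge is a hash of commitment and message.
    c = int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + msg).digest(), "big") % q
    s = (k + c * x) % q                 # response
    return r, s

def verify(y, msg: bytes, sig):
    r, s = sig
    c = int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + msg).digest(), "big") % q
    # Same check the verifier of the interactive scheme would do.
    return pow(g, s, p) == (r * pow(y, c, p)) % p
```

The signature is just the transcript (commitment, response); the verifier recomputes the challenge itself, which is what makes the result non-interactive.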
Basically, what they said is: if you have two transcripts that agree up to the last challenge, and from those you can extract the secret key, then you have this property called n-soundness and the transform applies. We show that this actually does not gain you anything, because all the schemes where this applies can be transformed back to three-pass. You can restructure such an IDS. Basically, we show that given such a five-pass IDS, you can build an equivalent three-pass scheme: you prove honest-verifier zero-knowledge by combining the first three messages into one, which gives a three-pass scheme, and for special soundness you show that you can extract from this three-pass scheme using the extractor that the five-pass scheme provides. So you're proving an equivalent three-pass scheme exists for any five-pass scheme that has this property. Which would be great, because then you could apply the traditional Fiat-Shamir transform to the resulting three-pass scheme. Only, none of the existing five-pass or seven-pass or whatever-pass schemes qualify, because they don't satisfy this condition: there is no such extractor that gets you the secret key from two of these transcripts. The authors ended up fixing this in the journal version of the paper this year. They redefined the property, and now it doesn't reduce to three-pass anymore, but it still does not apply to existing schemes. So that's the gap where our work comes in. What we do is a Fiat-Shamir transform specifically for five-pass, as opposed to (2n+1)-pass, and we restrict it to a very specific form. We look at schemes where the challenges have a particular shape: in five-pass, you typically have two challenge phases. In the first challenge phase, you take a challenge from a set of size q, where q is a parameter you define, and the second challenge is binary.
So it's either a zero or a one. That restricts the setting we're in: it's not a general five-pass scheme, but one that conforms to these limitations, what we call a q2-IDS. And then we prove that this gives EU-CMA security using a dedicated instance of the forking lemma, specifically tailored to this q2 constraint. Basically, we assume a successful forgery. Then we generate four signatures following a pattern on these specific challenges: for the first challenge, you have one that agrees and one that disagrees, and then for the second one also one that agrees and one that disagrees. This very specific pattern follows from the fact that the second challenge is binary. We obtain four of these traces and use an extractor to show that this works. This applies to IDSs that follow this pattern. Now, before going into the very specific instance we apply this to, because it looks like we have some instance in mind where this condition holds, let's first look at the context: the hardness problem we're basing this on. That's multivariate quadratics, the MQ problem, and it's defined as follows. We take a function family MQ(n, m, F_q), where each function consists of m polynomials that are quadratic, so basically they consist of quadratic terms and linear terms with coefficients. Together, all these coefficients make up an instance of the problem. The problem is: given some output of such a function, find a preimage x that maps to it. Basically, you're solving a system of equations where you have coefficients a and b; you put the x's in and get the y's out, but going back, if you have the y vector, you cannot easily find the x's. That, in a nutshell, is the hard problem behind this, and now we look at an identification scheme that uses it.
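As a concrete illustration of the problem just defined (toy parameters of my own choosing, far smaller than anything secure), here is a direct evaluation of such a system: given coefficients a and b, computing y = F(x) is easy, and the MQ assumption is that recovering x from y is hard.

```python
import random

def mq_eval(a, b, x, q):
    """Evaluate an MQ system: y_i = sum_{j<=k} a[i][j][k]*x_j*x_k + sum_j b[i][j]*x_j (mod q)."""
    n = len(x)
    y = []
    for ai, bi in zip(a, b):
        acc = 0
        for j in range(n):
            for k in range(j, n):      # quadratic terms, upper triangle only
                acc += ai[j][k] * x[j] * x[k]
            acc += bi[j] * x[j]        # linear terms
        y.append(acc % q)
    return y

# Toy instance: q = 31, n = m = 4 (the talk's parameters are q = 31, n = m = 64).
q, n, m = 31, 4, 4
rng = random.Random(1)
a = [[[rng.randrange(q) for _ in range(n)] for _ in range(n)] for _ in range(m)]
b = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
x = [rng.randrange(q) for _ in range(n)]
y = mq_eval(a, b, x, q)   # the easy direction; finding x from y is the hard problem
```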
I'm not expecting you to read this protocol in detail, but just to get a general idea of what this IDS looks like: it has the five-pass shape we're looking for. The first challenge alpha comes from F_q, and the second challenge ch2 comes from the set {0, 1}, so that's a binary challenge. First the prover commits to some randomly chosen vectors from F_q^n, then responds to the challenge alpha; there's an evaluation of F in there, that's the MQ function we just discussed, and there's also G up there, which is a variant of it; I'll come back to that on the next slide. Basically, this follows exactly the pattern we just looked at, the assumptions under which the proof holds. So this is the scheme we're going to look at. How does the scheme work in a bit more detail? It relies on just the MQ problem. Typically, a scheme based on the MQ problem would also rely on other related problems, like the isomorphism of polynomials, and these related problems have typically introduced the security issues we've seen with other schemes in this area: people would claim the schemes are based on the MQ problem, but the related problems lead to attacks. For this scheme, it's only the MQ problem we're relying on, because that's the only thing we're evaluating here. What Sakumoto et al. did for this IDS is show that you can take the MQ problem and split the secret into parts, using a bilinear function that I won't detail here, in such a way that you can reveal either these three vectors or the other three vectors, which basically form the responses of the prover here.
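The "variant G" mentioned above is the polar form of F. I'm assuming here the standard construction G(x, y) = F(x + y) − F(x) − F(y), which cancels the linear and pure-square parts and leaves a bilinear function, which is what lets Sakumoto et al. split the secret. A quick check of bilinearity on a toy instance:

```python
import random

def F(a, b, x, q):
    # y_i = sum_{j<=k} a[i][j][k] x_j x_k + sum_j b[i][j] x_j  (mod q)
    n = len(x)
    return [(sum(a[i][j][k] * x[j] * x[k] for j in range(n) for k in range(j, n))
             + sum(b[i][j] * x[j] for j in range(n))) % q
            for i in range(len(a))]

def G(a, b, x, y, q):
    # Polar form: G(x, y) = F(x + y) - F(x) - F(y); bilinear in (x, y).
    s = [(xj + yj) % q for xj, yj in zip(x, y)]
    return [(fs - fx - fy) % q
            for fs, fx, fy in zip(F(a, b, s, q), F(a, b, x, q), F(a, b, y, q))]

q, n, m = 31, 3, 3
rng = random.Random(7)
a = [[[rng.randrange(q) for _ in range(n)] for _ in range(n)] for _ in range(m)]
b = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
x = [rng.randrange(q) for _ in range(n)]
y = [rng.randrange(q) for _ in range(n)]
z = [rng.randrange(q) for _ in range(n)]

# Additivity in the second argument: G(x, y + z) = G(x, y) + G(x, z).
yz = [(u + v) % q for u, v in zip(y, z)]
lhs = G(a, b, x, yz, q)
rhs = [(u + v) % q for u, v in zip(G(a, b, x, y, q), G(a, b, x, z, q))]
assert lhs == rhs
```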
And this gives a sort of proof where you prove knowledge of one half of the secret without revealing the other half, because the split is random and does not allow you to compute the other half of the secret s. I won't go into further detail on this, but that's the general idea. So what do we do with this? We now build a signature scheme out of it by applying the Fiat-Shamir transform we saw earlier. We start by sampling some seeds and picking a random secret key value that serves as the x for our function F. Basically, we had this vector x of input elements that goes into capital F to evaluate to some y. Here SK is the x and PK is the y, and that instance is basically our key pair. We also include a seed to sample F from, because F is this large set of coefficients, and it would be very impractical to have all of it as part of your key, so you typically just have a seed to expand it from. You could also make F a system parameter, but there's no strong reason to do that instead of just making it part of your key. So what does signing look like? We sign a randomized digest D of the message M: we hash it and use the resulting bit string as input. Then we perform the rounds of the IDS. Basically, we're doing the Fiat-Shamir transform where we first parallelize the IDS: we make a parallel composition of r instances of the identification scheme. This consists of 2r commitments, because as you saw earlier there are two commitments per instance, and there are also 2r evaluations of the MQ function, because there's F and there's G up there. I'm not sure how legible this is; people in the back will just have to believe me. So there are 2r commitments, some multiplications in F_q for the challenge alpha, and then 2r MQ evaluations. And in terms of where the computational effort goes:
These commitments are just applications of hash functions or some string commitment, but the 2r evaluations of MQ, that's the costly part. Then there's the size: we can play a few tricks to reduce the signature size. That's typically the bottleneck here; you don't want a very large signature, so you want to limit this as much as you can, and you can do that in a couple of ways. One of the tricks we pull is to only include the necessary commitments: the verifier reconstructs one of the two commitments anyway, so you only need to supply the other one, and the one the verifier reconstructs does not need to be part of the signature. And you could commit to seeds: instead of committing to full sets of random data, you commit to a seed that produces the random data and later reveal the seed. For our parameters that last trick didn't apply, but in general it could work. And then, how does verifying work? Basically the same story, bottom up. You reconstruct the digest and the system parameter F; since the seed is part of the public key, anyone can just reconstruct F. You reconstruct the challenges from what is part of the signature, because the prover used the message and his commitments to generate the challenges. You verify the responses. You reconstruct the missing commitments that we omitted by only including the necessary ones, and then you check whether the commitments that were omitted actually match the hash that was included to account for leaving out half of them. And then there are a bunch of parameters that we haven't instantiated so far: there's k, the security parameter, and there are n and m, the dimensions of our MQ problem, where n is the number of inputs and m the number of outputs. We're doing this over some finite field F_q.
There's a commitment function, there are a bunch of hash functions, and there are pseudorandom generators. So let's now look at a specific instance of this. MQDSS-31-64 is the instance we define in our work, for security parameter k = 256. This results in a post-quantum security level of 128 bits. We have a soundness error kappa that depends on q, which makes it important to choose a q that is not two. Typically people would choose q = 2 for one of these problems, but if you increase q, kappa declines, so you need fewer of these IDS instances composed in parallel, which makes it interesting to choose a slightly larger q. So we go for q = 31, and n and m are 64. These are constrained by attacks that become feasible for lower n and m; you could also let them vary, a slightly larger n and a slightly smaller m for example, or the other way around, but they're also an artifact of wanting nice parameters for the code; I'll come back to that in a second. Then there are the functions you need: commitments, hashes, pseudorandom generators. None of this is really much of a factor in how fast the scheme is, so we make the reasonable choice of just going with Keccak-based functions here. To summarize, the signature contains a value R for the randomized message digest, a hash over the commitments we leave out, and then for each of the rounds, there are going to be 269 of those, the response vectors t, e and r plus the one commitment the verifier cannot reconstruct, so we need to include half of them. This comes down to roughly 40 kilobytes of signature size. Now a slightly more detailed look at how this MQ evaluation works: going from F(x) to x should be hard, but going from x to F(x) is something we do many times during the creation of a signature.
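The round count can be reproduced from the soundness error. For the underlying five-pass identification scheme, the per-round soundness error is kappa = 1/2 + 1/(2q) = (q+1)/(2q), and composing r rounds in parallel needs kappa^r ≤ 2^(−k). A quick sanity check of the numbers (my own calculation from that formula):

```python
import math

def rounds_needed(q: int, k: int) -> int:
    """Smallest r with kappa^r <= 2^-k, where kappa = (q+1)/(2q)."""
    kappa = (q + 1) / (2 * q)
    return math.ceil(k / -math.log2(kappa))

# q = 31, k = 256 reproduces the 269 rounds mentioned in the talk.
r31 = rounds_needed(31, 256)
# q = 2 would need far more rounds, which is why a larger q pays off.
r2 = rounds_needed(2, 256)
```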
So evaluating F(x) should be easy, and when I say easy, I mean fast. Basically, look at the MQ evaluation as a triangle: you have the x's along one axis and the x's along the other axis, and you multiply each pair to create the quadratic terms; you have one such triangle of coefficients a for every element of the output vector, so you're basically evaluating one triangle per output element. But triangles are not that great to work with, so we go to a rectangle, and that's also where the parameters come in, because we want a rectangle that fits nicely into registers. As discussed earlier, n and m are 64, and we have elements of F31, which are nicely represented in five bits; you can place them into slightly larger spaces, either 8 or 16 bits, and then have some room to do your computations. This all fits nicely into the architecture we're targeting, which was another reason to come up with these parameters. To give you some conclusions and benchmarks: the signatures are 40 KB, which is roughly what SPHINCS is also getting; SPHINCS is the current state of the art in hash-based signatures, which seems to be the most conservative signature choice you could make right now. Public and private keys are 72 and 64 bytes, so those are very small, a result of having just a seed and the input to the MQ function. Signing time is not comparable to lattice-based signatures, but definitely faster than what SPHINCS is getting. Verification and key generation are similarly reasonable. But more importantly, or also importantly, we now have a general Fiat-Shamir transform for IDSs of the q2 form. And we present a signature scheme that achieves competitive signatures with a reduction to just the MQ problem. That's it.
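One way to see the triangle-to-rectangle step (my own illustration of the idea, not necessarily the exact layout the implementation uses): an upper-triangular quadratic form over F31 can be replaced by a symmetric, full-square one by scaling the off-diagonal coefficients by the inverse of 2 mod 31, which is 16. The full n×n structure is regular and vectorizes nicely.

```python
import random

q = 31
inv2 = 16   # 2 * 16 = 32 = 1 (mod 31)

def tri_eval(A, x):
    """Quadratic form from the upper triangle: sum_{j<=k} A[j][k] x_j x_k mod q."""
    n = len(x)
    return sum(A[j][k] * x[j] * x[k] for j in range(n) for k in range(j, n)) % q

def square_eval(B, x):
    """Full-square form x^T B x mod q; a regular n x n loop, register-friendly."""
    n = len(x)
    return sum(B[j][k] * x[j] * x[k] for j in range(n) for k in range(n)) % q

def symmetrize(A):
    """B with B[j][j] = A[j][j] and B[j][k] = B[k][j] = A[j][k] * inv2 for j < k."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for j in range(n):
        B[j][j] = A[j][j] % q
        for k in range(j + 1, n):
            B[j][k] = B[k][j] = (A[j][k] * inv2) % q
    return B

rng = random.Random(3)
n = 5
A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
x = [rng.randrange(q) for _ in range(n)]
assert tri_eval(A, x) == square_eval(symmetrize(A), x)
```

The doubled off-diagonal terms of the symmetric form are exactly cancelled by the factor 16, so both evaluations agree for every x.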
The code is publicly available and in the public domain, and that's it. Questions? I'll ask a quick question then. I was interested in whether you can do something five-pass that would make code-based signatures more efficient. Anything about that? Yeah, actually, in the journal version of the paper I referenced earlier, the one with the updated definition, they also present a code-based signature scheme, and while their reduction does not immediately apply, ours would, because it's of the same form: it's a q2-IDS, so what we're doing here would directly apply. I'm not sure what the performance or the sizes would be like, but it would definitely work. This is why we ask you to send us your slides before the session, or to put them on the conference computer before the session starts. I did try before the session. Any questions? The problem is that the conference computer does not have PowerPoint. No, it does. No, you don't; you have WPS Office. What? The laptop that was here before didn't have PowerPoint. I mean, it didn't earlier when I tried. That's WPS Office. But that's not the same; yeah, just put it there. Yeah, I know that. There I can see the time; it's my clock. Okay, got it. Okay, my apologies for the delay. Some of the text will appear in the wrong order, because this is not PowerPoint on this laptop, but it should all be understandable anyway. Okay, so I'm going to talk about collapse-binding quantum commitments without random oracles. If you don't know what collapse-binding commitments are, that's not a problem; I will start by motivating them and telling you why we have them. For this, I would first like to give you an example of something we could do and which is realistic to do. Let's say we have a horse race; that's just a use-case example.
We want to construct a commitment that allows a player to bet on a horse without telling the bookie beforehand which horse he committed to. That's kind of a typical teaching example for commitments. How can we do that? Well, for example, we could take the name of the horse, Spice Spirit in this example, take a hash of that name together with some randomness, and send the hash to the bookie. Now let's say Spice Spirit is the horse that wins. What does the player do? He sends the randomness to the bookie; the bookie checks whether the name of the winning horse together with the randomness gives the right hash, and if so, pays out. That's a typical approach. Now we can ask ourselves: is this a secure protocol or not? So consider a cheating player. We are looking at the binding property now; the hiding property is not the topic of this talk. Could a cheating player achieve the following? Instead of actually sending the hash of some horse name, he just sends some fake value h that he made up in whichever way he likes. Later, when some other horse wins, say Walloping Waldo, the player runs some algorithm to find an r such that the hash of Walloping Waldo and r equals the value h he sent earlier, and the bookie pays out. This would be a typical attack, and now I ask: is this possible? Well, if we do not specify anything about the hash function, it could of course be possible; the hash function could be the all-zero function or something like that. But I guess everyone here knows that in classical cryptography, all we need to do is take a collision-resistant hash function here.
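The hash-based commitment from the horse-race example can be sketched in a few lines (a standard construction, nothing specific to this talk):

```python
import hashlib
import secrets

def commit(message: bytes):
    """Commit to a message: publish h, keep r secret until opening."""
    r = secrets.token_bytes(32)                 # fresh randomness, also hides the message
    h = hashlib.sha256(message + r).digest()
    return h, r

def open_check(h: bytes, message: bytes, r: bytes) -> bool:
    """Bookie's check: does the claimed (message, r) match the committed hash?"""
    return hashlib.sha256(message + r).digest() == h

h, r = commit(b"Spice Spirit")
assert open_check(h, b"Spice Spirit", r)        # honest opening succeeds
assert not open_check(h, b"Walloping Waldo", r) # opening to another horse fails
```

The binding question of the talk is exactly whether a (quantum) cheater can produce an h and later find an r that opens it to a horse of his choice.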
Because collision resistance means it is infeasible to find two different inputs to this function that have the same hash, and therefore the player cannot find one input containing Walloping Waldo and another input containing Spice Spirit that hash to the same value. The consequence is that h can be opened to one horse only, not two. And now comes the big surprise: in the quantum setting, so if the adversary potentially has a quantum computer, this reasoning does not hold. It could be that the hash function is collision-resistant, even collision-resistant against quantum adversaries, and still the player can unveil to any horse he pleases. At least relative to certain rather artificial oracles, such a hash function has been explicitly constructed in prior work. The first question you might ask is: is this even possible? Although this is not the topic of the present paper, I would like to say a word or two about why this can happen in principle, because otherwise you might stop listening, thinking I'm solving a problem that can't occur. So why could such an attack be possible with a collision-resistant hash function? On a very high level, what could happen is that the player sends some fake value h, and this fake value is produced by a quantum algorithm that computes not only the value h, but also some quantum state psi. It's a randomized quantum algorithm that outputs h and the quantum state together each time you run it; you can't just first pick the h and then compute the quantum state, they come together. And later, when the player knows which horse he wants, some other algorithm computes from psi the randomness needed to open to that value.
Now, since quantum states cannot be cloned in general, we see that there is no contradiction with collision resistance: this second algorithm uses up psi, so we can run it only once. We can open the commitment to whatever value we want, but we cannot open it to two values at the same time; in particular, we cannot find a collision. And this has been explicitly done relative to some oracles, so it seems to be a threat that is at least possible. So the question, already addressed at Eurocrypt this year, is: what do we do against this? How do we improve commitment schemes to avoid what I call here the Walloping Waldo attack? There were two contributions. One was the definition of a stronger notion of computational binding, because it turns out that if you just take over, one to one, the definition of computational binding used in the classical setting, even the definition does not exclude this attack. There is an improved definition called collapse binding. I will not show you that definition for time reasons, but let me tell you that it does imply that you cannot cheat in the example I showed you. It is nice because it composes in parallel, and it is rewinding-friendly, so even in proofs that use rewinding, which is particularly tricky in the quantum setting, for example in proofs of knowledge and so on, these commitments behave nicely. But the question then is: do they exist? That was the second contribution. Another notion was introduced, a security notion for hash functions called collapsing hash functions, and it was shown that with standard constructions from the classical world, a collapsing hash function gives a collapse-binding commitment, so a commitment that is good for all the purposes that we know of.
Collapsing is a strengthening of the notion of collision resistance, and I claim it is what we actually want from a hash function in a post-quantum setting. It was shown that such functions exist in the random oracle model. But the big open question is: do collapsing hash functions also exist in the standard model? Because you could object: well, I made up some definition and showed that the random oracle satisfies it, but perhaps it's a definition that is impossible to achieve otherwise. So the goal of the present paper is to show that collapsing hash functions exist in the standard model, without random oracles. In particular, this also implies the existence of collapse-binding commitments in the standard model. So it solves all the problems I've mentioned so far; not all the problems in the world. Okay, so let me first tell you what a collapsing hash function is. I will show you the definition here a bit differently from how it is in the paper, because after the paper I came up with a slightly more intuitive way of phrasing it, but it is easily seen to be equivalent. Consider a hash function; we want to generalize the idea of collision resistance to the quantum setting. In the classical setting, collision resistance means we cannot find two values that have the same hash. In the quantum setting, as we have seen, this is not enough. Instead, we will ask for something which, at least on a vague intuitive level, means we cannot find a superposition of two values that have the same hash. This is formalized as follows. We look at an adversary that outputs a number of messages M in superposition.
Then we take this superposition of messages, measure which message it is, and give the state after the measurement back to the adversary. Quantum mechanics tells us that if the adversary produces a superposition of messages and we measure it, what we give him back is not a superposition, but a randomly chosen one of the messages that were in the superposition. This we contrast with a second game where the adversary again produces a superposition of messages, but now, instead of measuring which message he sent, we only measure the hash of the message. So we perform a measurement that takes less information out of the quantum state, and that means that whatever the adversary gets back will be, potentially, still a superposition of several messages, all having the hash value that we measured, because the measurement only narrows the superposition down to the messages with that hash. The definition of collapsing is then in principle very simple: it says that an adversary cannot tell whether he is running in the first setting or in the second. What does this intuitively mean? If we ignore issues like computational limitations and just think information-theoretically, it means that measuring M and measuring the hash of M do the same thing to the state, so the information content of measuring M and of measuring H(M) is the same. In other words, measuring the hash tells you as much as measuring the message, which basically means there is no collision. That was an information-theoretic argument; of course, a hash function will have collisions, they are just hard to find. But this should be enough for now to get a vague feeling for why the definition is the way it is.
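In symbols, the two games just described can be summarized as follows (my own paraphrase, not the paper's exact formalization): the adversary A prepares a message register M in superposition, we either measure M itself (first game) or only H(M) (second game), hand the post-measurement state back, and A outputs a guess b. The collapsing advantage is

```latex
\mathrm{Adv}^{\mathrm{collapse}}_{H}(A) \;=\;
\Bigl|\;
\Pr\bigl[\,b = 1 \;:\; \text{measure } M\,\bigr]
\;-\;
\Pr\bigl[\,b = 1 \;:\; \text{measure } H(M)\,\bigr]
\;\Bigr| ,
```

and H is collapsing if this advantage is negligible for every quantum polynomial-time adversary A.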
Basically, the hash function is supposed to look as if it were not possible to send a superposition of different messages with the same hash. Okay, so that's the definition. Now the question: do they exist? I will sketch the construction we found at a high level. The main tool we use is lossy functions. You may have heard of lossy trapdoor functions; lossy functions are the same, except that we don't need a trapdoor, so we slightly weaken the requirements. A lossy function is a function keyed by some public parameter, and the parameter can come in two different kinds: depending on the parameter, the function is either injective, or it is highly non-injective in the sense that its image is concentrated on a rather small subset of its range. The lossy-function definition says that you cannot distinguish between these two kinds of parameters, and this is what we will use. The construction is this one. The sizes of the blocks here represent how many bits the different inputs and outputs have, so you can see where it expands and where it shrinks. We take the message and feed it through a lossy function. Now, the bit length of the output of a lossy function is considerably longer than what we put in, so on its own it wouldn't be a good hash function. But by the definition of a lossy function, it always looks injective: no matter whether we use a lossy parameter or an injective parameter, it will always look like an injective function. And an injective function is easily seen to be collapsing, because an injective function doesn't even have collisions, so in particular you cannot have a superposition of colliding inputs. So a lossy function is a collapsing function, but one whose range is bigger than its domain, which so far is pointless.
However, if you run the lossy function in the lossy mode, then although the range is very big, the actual image is a very small subset of it, much smaller than the final hash. So we have many bits, but only very few bit strings are actually possible. And now we apply a universal hash function. We can very easily show that if you apply a universal hash function to a very small set, then with very high probability it is injective on that set. So when the key is lossy, the universal hash function will be injective on the actual image of the lossy function, and therefore collapsing. So the composition of these two functions will be collapsing, and the output will actually be shorter than the message. So now we have managed to construct a collapsing function that takes a message of some length and makes a shorter hash from it. Are we done now? Well, it depends. For some purposes that may be good enough, but generally we would like hash functions that can take a very, very long message and bring it down to a short hash. Here, the more we want to compress, the stronger the assumption about the lossy function becomes, and we would prefer assumptions that are as weak as possible. So ideally we cut off only a few bits, or perhaps half the size, but we do not turn a gigabyte into a kilobyte or something like that. So what we need to do is to hash long messages. I need the text here, it will come soon, yeah? Okay, so how do we hash long messages? Well, in classical crypto there are many well-known constructions, and in particular, for our purpose, the Merkle-Damgård construction turns out to work.
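The composed hash H(x) = UH(LF(x)) can be sketched with toy numbers (again made-up stand-ins, not the paper's parameters): a lossy-mode function whose image has only p values, followed by one member of the universal family y → a·y + b mod q. In this tiny example the universal hash is even unconditionally injective on the image, because the image fits below the prime q; in general one only gets injectivity with high probability over the choice of a, b.

```python
import random

p, q = 101, 100003            # lossy image size p; universal-hash range Z_q

def lossy_f(x):
    # lossy-mode stand-in: the image has only p values,
    # however large the input domain is
    return x % p

a, b = random.randrange(1, q), random.randrange(q)

def uh(y):
    # one member of the universal family y -> a*y + b mod q (q prime)
    return (a * y + b) % q

def H(x):
    # the composed hash: compresses the domain, yet is injective
    # on the small image of the lossy function
    return uh(lossy_f(x))

image = {lossy_f(x) for x in range(10**6)}         # small image in lossy mode
print(len(image), len({uh(y) for y in image}))     # 101 101: injective here
```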
So if the hash function h — now I get rid of this here — is collapsing but takes two blocks into one block, then a construction like this, with some suitable padding, is a collapsing hash function: we hash the first block of the message, concatenate the result with the second block, hash that, concatenate with the third block, hash again, and so on. Why is that the case? What we need to show is that measuring the hash is equivalent to measuring everything that goes in, and I will sketch that. Assume we measure the final hash here. Because this last hash application on its own is collapsing, that is indistinguishable from also measuring the inputs to that application. And now the output of the previous application is measured, which is indistinguishable from measuring its inputs, and so on. This is a bit simplified, because we cannot simply do an induction over the length since the length is dynamic, et cetera, but the basic intuition of what is happening is captured by this. And that means that measuring the hash of the function is indistinguishable from measuring the input, and we said that is what we mean by a collapsing hash function. I have more results, but no time, so I skip them and only mention the main interesting question: can we construct collapse-binding commitments from even weaker assumptions, perhaps from one-way functions or from collision-resistant hash functions? Lossy functions are already a relatively powerful tool, and we would ideally like to do it with one-way functions, because classically we can build computationally binding, statistically hiding commitments even from just one-way functions. So that is an open question. Thank you for your attention. Thank you very much for dealing with the technical issue but still finishing on time, I'm impressed. So we actually have time for a quick question.
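The Merkle-Damgård iteration described above can be sketched as follows (a minimal sketch: the block size and the use of truncated SHA-256 as the two-block-to-one-block compression function are stand-ins, not the paper's choices, and the padding shown is a simple variant of Merkle-Damgård strengthening):

```python
import hashlib

BLOCK = 16                    # toy block size in bytes

def compress(chain: bytes, block: bytes) -> bytes:
    # stand-in compression function taking two blocks to one block
    return hashlib.sha256(chain + block).digest()[:BLOCK]

def md_hash(msg: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    # pad with 0x80, zeros, and the message length (MD strengthening)
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK)
    padded += len(msg).to_bytes(8, "big")
    # iterate: hash a block, concatenate the result with the next
    # block, hash again, and so on
    state = iv
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state

print(md_hash(b"hello world").hex())
```

The talk's result is that if `compress` is collapsing, this iterated construction is collapsing as well.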
Classical hash functions might give you problems in the quantum world, so can you tell me whether SHA-2 or SHA-3 have any chance of exhibiting this weird behavior? Well, since the random oracle on its own is collapsing, if we assume that SHA-2 or SHA-3 behaves like a random oracle, then we are on the safe side. Of course it is not clear whether this is true, but basically in the classical setting we do a similar thing: we say the compression function behaves ideally, and therefore the hash function is collision resistant. So I do not see any problems with those functions, especially since, for example, for SHA-2 the building block is designed by, I don't know, throwing the bits around heavily enough so that everything looks random, and then we build the full hash function using Merkle-Damgård, which has been shown secure here. So for SHA-2 I am very confident that everything is fine. With SHA-3 we do not use Merkle-Damgård, we use the sponge construction. There I am also pretty confident, but that is based on unpublished proofs that I only have on my whiteboard so far. So one needs to additionally analyze the sponge construction, which is a bit different, because it does not seem to be indifferentiable in the quantum model, and so on. But if you trust the scribbling on my whiteboard, then everything is fine there as well. Okay, great, thanks. All right, thanks Dominic for your talk. Thank you.