Hi, I'm Yashvanth, and I'll be speaking about threshold Schnorr with stateless deterministic signing from standard assumptions. This is joint work with François Garillot, Payman Mohassel, and Valeria Nikolaenko.

So let's say Bob would like to send some bitcoin to his friend Alice. He first creates a message to that effect. He takes out his laptop, on which he has a signing key, and he presses a button to produce a signature to attach to the message. However, an attacker could enter the picture, hack into Bob's laptop, and steal his secret key. She can then divert funds to herself, as she has unrestricted access to Bob's account. To mitigate this, Bob could employ a technique by which he splits his signing key into multiple fragments, so that even if one fragment is compromised, the secret key stays safe. This notion is achieved by threshold signatures, where the secret key is split into multiple fragments and stored on different devices, and the devices must interact to collaboratively sign messages. The important thing is that the signature that comes out of the system looks just as though it came from an execution of the regular single-party signing algorithm.

The setting we consider in this work is that of dishonest majority, that is, all devices but one are corrupt. We will mostly talk about the two-party case, though the techniques generalize. As for adversarial behavior, we consider malicious adversaries, who can deviate arbitrarily from the protocol.

We consider the Schnorr signature scheme in this work, which is an elegant scheme with security based on the hardness of computing discrete logarithms. The scheme was initially hampered in terms of adoption due to a patent, but is now seeing more deployment across the internet in the form of EdDSA. The nice thing is that it's very easy to distribute, with natural threshold key generation and signing protocols going back to classic works.

So let's take a look at how Schnorr signatures are structured. Given an elliptic curve group of prime order q generated by G, the secret key sk is sampled uniformly from Z_q, and the public key is simply the secret key times the generator. Now let's step through signing. You begin by sampling an instance key k and multiplying it with the group generator to get the nonce R; hash the nonce with the message to get e; and compute the signature s as a linear combination of k and sk, weighted by e. Verifying the signature is simple: it's just checking the signing equation in the exponent.

A useful property of Schnorr is its threshold friendliness. In particular, the signing equation is a linear function of k and sk, which is very easy to distribute with most natural secret sharing schemes. For instance, in the two-party setting, we can produce additive shares of the signing key sk, sample additive shares of the instance key k, exchange the corresponding nonces, and compute the hash the usual way. Shares of s can then be computed locally as a linear combination of the shares of k and sk, and exchanging these completes the signature. This can be made maliciously secure quite easily, and it also generalizes to n parties.

However, it turns out that the security of Schnorr signatures is extremely sensitive to the distribution of k. That is, even a tiny amount of non-uniformity can be leveraged to completely retrieve the signing key and break security.
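As a quick reference for the structure just described, here is a minimal Python sketch of Schnorr signing and the two-party additive-share variant. This is my own illustration, not the paper's implementation: a tiny, insecure multiplicative group (p = 23, q = 11, g = 2) stands in for the elliptic curve group, SHA-256 is the hash, and the helper names (keygen, sign, two_party_sign) are hypothetical.

# Minimal sketch of Schnorr signatures and two-party additive-share signing.
# Toy Schnorr group (order-q subgroup of Z_p^*) with tiny, INSECURE parameters.
import hashlib, secrets

p, q, g = 23, 11, 2          # 2 has order 11 mod 23; real schemes use an elliptic curve group

def H(R, m):                 # e = H(R, m), reduced mod q
    return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(), "big") % q

def keygen():
    sk = secrets.randbelow(q)        # sk <- Z_q
    return sk, pow(g, sk, p)         # pk = g^sk (written "sk * G" additively in the talk)

def sign(sk, m):
    k = secrets.randbelow(q)         # instance key
    R = pow(g, k, p)                 # nonce R = g^k
    e = H(R, m)
    s = (k + e * sk) % q             # linear combination of k and sk, weighted by e
    return R, s

def verify(pk, m, sig):
    R, s = sig
    e = H(R, m)
    return pow(g, s, p) == (R * pow(pk, e, p)) % p   # the signing equation, in the exponent

# Two-party signing from additive shares: sk = sk_A + sk_B, k = k_A + k_B (mod q).
def two_party_sign(sk_A, sk_B, m):
    k_A, k_B = secrets.randbelow(q), secrets.randbelow(q)
    R = (pow(g, k_A, p) * pow(g, k_B, p)) % p        # parties exchange nonces R_A, R_B
    e = H(R, m)
    s_A = (k_A + e * sk_A) % q                       # each share of s is computed locally
    s_B = (k_B + e * sk_B) % q
    return R, (s_A + s_B) % q

sk, pk = keygen()
assert verify(pk, "send 1 BTC to Alice", sign(sk, "send 1 BTC to Alice"))
sk_A = secrets.randbelow(q); sk_B = (sk - sk_A) % q
assert verify(pk, "send 1 BTC to Alice", two_party_sign(sk_A, sk_B, "send 1 BTC to Alice"))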
These attacks on a biased instance key started with the hidden number problem formulated by Boneh and Venkatesan, and extend all the way to modern techniques that can be used to mount attacks with practical running times. This is a major concern in practice, because there are a variety of engineering problems that make acquiring fresh entropy quite difficult.

So this is a systems-level problem that we can largely avoid with a simple cryptographic trick. During one-time key generation, we can sample a seed, and then when signing a message, instead of sampling a fresh k, we can compute k as a deterministic function of the seed and the message being signed. Assuming that this function is a pseudorandom function, the security of the resulting scheme is as good as choosing k uniformly for each message. This is a classic idea that is employed, for instance, by the modern EdDSA variant of Schnorr.

So let's try a simple attempt at deterministic threshold signing. In the two-party signing scheme that we looked at earlier, let's say we also sample a seed on each of the devices, do the trivial thing, and keep track of what information the adversary can learn when we execute this trivial protocol. In this scheme, the adversary first collects a linear combination of the honest party's shares of k and sk; this is just by honest execution. Now, since the nonces are derived deterministically, the honest party is always going to derive the same nonce, but an adversary could deviate from honest nonce derivation. To see what effect this has, let's see what happens when the adversary uses a nonce k_A* instead of the honest k_A, and trace how this error propagates through an instance of the signing protocol. Essentially, we find that the adversary now learns a different linear combination of the same secrets. That is because the honest party derives its nonce k_B deterministically as a function of the seed and the message, and neither the seed nor the message has changed. So this induces the honest party to reveal two linear equations in two unknowns, and this is bad because the adversary can now simply solve for the honest party's share of the signing key, and thereby learn the signing key in its entirety. This was first observed by Maxwell, Poelstra, Seurin, and Wuille, and it constitutes a rather specific instance of a general flavor of problem that has previously been encountered in the context of resettable zero-knowledge and resettable MPC, starting with the work of Canetti et al.

So instead of evaluating the PRF on just the message, let's try a different approach. For instance, we could try maintaining a counter and deriving the instance key by applying the PRF to the seed sampled earlier and the counter. Each time the counter is accessed, it is also incremented, and this ensures that the instance keys that come out of this method are always fresh. However, this introduces a new attack surface: reusing a counter is now equivalent to reusing a nonce, so it would be catastrophic to reuse the same counter twice. Unfortunately, undetectable reuse of stale state is a significant concern in practice, among other things due to interruptions in power supply, rolling back to previous states, restoring from backups, and loading virtual machines from old snapshots. These are all events that can be adversarially induced or even occur due to careless mistakes.
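Before getting to how we deal with stale state, here is a small self-contained sketch of the key-recovery attack on naive deterministic two-party signing described above. Again this is illustrative only: the same toy group as before, with HMAC-SHA256 standing in for the PRF F(seed, m).

# Why naive deterministic nonces break two-party Schnorr: two equations, two unknowns.
import hashlib, hmac, secrets

p, q, g = 23, 11, 2   # same toy Schnorr group (insecure, illustrative only)

def H(R, m):
    return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(), "big") % q

def F(seed, m):       # deterministic nonce derivation k = F(seed, m)
    return int.from_bytes(hmac.new(seed, m.encode(), hashlib.sha256).digest(), "big") % q

sk_A, sk_B = secrets.randbelow(q), secrets.randbelow(q)   # additive shares of the signing key
seed_B = secrets.token_bytes(16)                          # honest party B's PRF seed
m = "send 1 BTC to Alice"

def honest_B(R_A, m):
    # Honest party B: derive k_B deterministically, reveal R_B and its share of s.
    k_B = F(seed_B, m)
    R_B = pow(g, k_B, p)
    e = H((R_A * R_B) % p, m)          # the challenge depends on the combined nonce
    return R_B, (k_B + e * sk_B) % q

# A malicious A runs the protocol twice on the SAME message with two deviating nonces.
k1 = 3
R_B, s1 = honest_B(pow(g, k1, p), m)
e1 = H((pow(g, k1, p) * R_B) % p, m)

k2 = 4
while True:                             # retry if the challenges collide in this tiny group
    _, s2 = honest_B(pow(g, k2, p), m)
    e2 = H((pow(g, k2, p) * R_B) % p, m)
    if e2 != e1:
        break
    k2 += 1

# B revealed s_i = k_B + e_i * sk_B with the SAME k_B, so A can solve the linear system.
sk_B_rec = ((s1 - s2) * pow((e1 - e2) % q, -1, q)) % q
k_B_rec = (s1 - e1 * sk_B_rec) % q
assert sk_B_rec == sk_B and k_B_rec == F(seed_B, m)   # adversary now knows sk = sk_A + sk_B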
So, undetectable state reuse sounds like a systems problem, but we couldn't possibly just leave it up to the systems people to worry about. Indeed, Parno et al. identified that state continuity, which is the problem at hand, is difficult even on dedicated, hardened, isolated devices. Broadly, there are two flavors of general solutions in the systems literature. One is to use helper nodes, which is inapplicable to our dishonest majority setting where there's only one honest party. Or we could use special-purpose hardware, as also shown in recent work. This is unsatisfying for a couple of reasons. Qualitatively, it introduces extra physical assumptions: we have to trust hardware. And quantitatively, it turns out that this is still quite slow: incrementing an SGX monotonic counter has been measured to take about 60 to 100 milliseconds, and others found even larger latencies. Of course, it's also expensive to purchase new hardware for each new deployment. Additionally, the hardware has a limited lifespan: the same line of work found that the non-volatile memory can wear out in just a few days of continuous use.

So in this work, we study whether solving this at the level of cryptographic protocol design could be useful. We ask the question: how can we design a threshold Schnorr protocol that enjoys stateless deterministic signing? That is, we want no party to have to sample fresh randomness or rely on updating state after each invocation. The idea is that we should then be able to safely restore crashed devices from just their long-term secrets, which are significantly easier to maintain. We develop new techniques to construct Schnorr threshold signing that is stateless and deterministic by design, while using standard cryptographic tools such as block ciphers, and we estimate the efficiency to be significantly faster than trusted hardware.

The high-level idea is really quite simple. Let's say we have some magic boxes that have each party's seed embedded in them, so that when given the message m as public input, they produce the corresponding nonces R_A and R_B and deliver them to the opposite party. This is secure by definition. Of course, the canonical way to instantiate such a box is to have each party commit to their respective seed and subsequently prove in zero knowledge that they derived the claimed nonce correctly when given a message m. There are a myriad of ways to construct such a proof system, and in order to make our choice of tools clearer, we lay down our priorities for this setting. Firstly, we want to focus on a conservative choice of the pseudorandom function, because the signatures, which are exposed to the outside world, strongly depend on the security of the PRF. Next, we would like lightweight computation, so that we retain friendliness to weaker devices such as mobile cryptocurrency wallets; this could also be useful in high-throughput settings. Finally, we want round efficiency to match that of regular threshold Schnorr, that is, three rounds; we don't want to increase the latency.

There is a variety of candidate cryptographic tools to instantiate such a box, and we recall some common candidates: SNARKs, generic MPC, MPC-in-the-head, and garbled circuits. With our constraints in mind, we can start ruling some of these out. In particular, matching the round efficiency of threshold Schnorr means that we clearly can't use generic MPC, because MPC protocols that are concretely efficient are also quite interactive and difficult to instantiate stateless-deterministically.
And the combination of requirements of low computation and a standard PRF rules out SNARKs, as they tend to be either heavy to compute or to require custom arithmetic PRFs for efficiency. The recent work of Nick, Ruffing, Seurin, and Wuille (MuSig-DN) constructed a custom PRF for which Bulletproofs are very bandwidth efficient, on the order of just a kilobyte, but the proofs are a bit heavy to compute, on the order of one second. So essentially their construction results in very compact proofs that are a bit slow to generate, which is sort of the opposite end of the spectrum from what we aim to construct.

So we're left with MPC-in-the-head and garbled circuits, for which there's an interesting direction in the work on zero knowledge for composite statements. Both of these tools are known to be efficient for Boolean circuits, but not for algebraic operations. The bridge to algebraic operations has been investigated in the works of Chase, Ganesh, and Mohassel, and of Backes, Hanzlik, Herzberg, Kate, and Pryvalov. This is a great direction, since AES and SHA have compact Boolean circuit representations, and we require such a bridge to elliptic curve algebra for Schnorr. However, our target is to ideally pay only cheap symmetric-key operations per proof, and existing techniques don't achieve this out of the box. Here is why: in existing works, the fundamental secure computation object, for instance the garbled circuit, is not the actual dominant cost. Instead, the dominant cost lies in the logistics around it. In particular, the current techniques applied to our setting would require on the order of a security parameter's worth of exponentiations, due to the homomorphic commitments and committed oblivious transfer instances that must be executed per bit of the witness; concretely, this is hundreds of exponentiations. Alternatively, Chase et al. show how to replace one specific part, namely the commitments, by garbling a gadget whose size is roughly quadratic in the security parameter, which is concretely more expensive than garbling a standard PRF; for instance, this gadget would cost about eight times as much as a single AES instance.

So we focus on the garbled circuit approach and develop new techniques so that the dominant cost of a proof in this paradigm is just that of the secure computation object, that is, garbling and evaluating the pseudorandom function. We make use of the ZKGC paradigm of Jawurek, Kerschbaum, and Orlandi. I won't recall that protocol in the interest of time, but our contributions can be understood independently of ZKGC; if you're interested in learning how these techniques fit in context, I encourage you to take a look at our paper. We also use the conditional disclosure of secrets compression technique of Ganesh, myself, Patra, and Sarkar.

So we develop the following new techniques to tailor and improve this paradigm. Firstly, a garbled gadget to output the exponentiation of an encoded input, which improves an equivalent gadget in the work of Chase et al. and also applies to anonymous credentials. Additionally, we construct a custom committed OT protocol that allows us to pre-process almost all of the public key operations. This makes input encoding cheaper than garbling the pseudorandom function, and it also finds application in distributed symmetric-key encryption.
So we phrase the task at hand as garbling a circuit C that can be expressed as phi composed with F, where F is a Boolean circuit and phi takes the bit representation of some eta-bit value y and outputs the elliptic curve point y times G. Consider the standard sequence of events in garbling a circuit, that is, producing the values we have on screen and feeding them into the right algorithms. We start with the bit decomposition of the input x, and then we encode it using these labels, capital X; encoding is simply choosing the appropriate capital X values. We evaluate the garbled circuit C-tilde on them to obtain an encoded output Z-tilde, which can then be decoded to get the clear output Z of the computation.

Looking under the hood, we split C-tilde into two distinct components: F-tilde, the garbling of the Boolean circuit F, and phi-tilde, the garbling of the exponentiation gadget. F-tilde receives the input labels X and produces an intermediate set of labels, capital Y, that corresponds to the intermediate string y, and phi-tilde translates this capital Y into the encoded output Z-tilde. We can use any standard garbling scheme for Boolean circuits to produce F-tilde; think of half-gates, for instance. For phi-tilde, we construct a new gadget that's inspired by the oblivious linear evaluation technique of Gilboa.

At a high level, the idea is to begin by sampling a random beta and a bunch of alpha_i values, and we define alpha to be the sum of all the alpha_i values. The garbled gadget itself is going to be a collection of ciphertext pairs, structured in such a way that the i-th pair allows the evaluator to decrypt alpha_i + y_i * 2^i * beta. The idea here is that when you add all of these up, you obtain alpha + y * beta, which, to give it a name, let's call lowercase z-tilde. Now multiplying little z-tilde by G gives us the full encoded output capital Z-tilde, which can easily be decoded by subtracting out alpha and dividing out beta in the exponent.

The intuition for security is as follows. First, authenticity of the garbled gadget, which corresponds to soundness of the ZKGC proof, comes from the fact that the mechanism with alpha and beta serves as an information-theoretic MAC, so forgery is as hard as either guessing these values or breaking the encryption scheme. Uniqueness of the garbled output, which is what gives us zero-knowledge in the ZKGC proof, comes from the fact that once alpha, beta, and Z are fixed, we can simulate Z-tilde perfectly; essentially, this technique is very similar to simulating a Schnorr signature or a Schnorr proof of knowledge. As for efficiency, this gadget is equivalent to garbling about 2 log q AND gates, which is a substantial improvement over a similar gadget from prior work. The result is that the cost of this particular operation is now insignificant compared to the cost of garbling the PRF circuit itself. So we've eliminated one of the logistical costs.

We now turn to the other dominant logistical cost, that is, committed OT, of which one instance is needed for each bit of the witness. The additional functionality that committed OT offers on top of regular OT is the ability to later open both messages that the sender had sent earlier. Unfortunately, it's unclear how to pre-process the public key operations of committed OT in such a way that the online phase is efficient, which in our setting means essentially non-interactive, while the correlations remain usable after executing the open phase.
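Before continuing with committed OT, here is a minimal sketch of the exponentiation gadget just described. The toy group, the random byte-string wire labels, and the hash-based one-time pad standing in for the garbling scheme's encryption are all stand-ins chosen for illustration; the point is only the alpha_i / beta mechanism and the decode step, not the paper's actual construction.

# Sketch of the garbled exponentiation gadget phi: bits of y -> "y * G" (here g^y).
import hashlib, secrets

p, q, g = 23, 11, 2                         # toy group of prime order q (insecure, illustrative only)
ETA = q.bit_length()                        # eta: number of bits of the value y

def pad(label, i):                          # pad derived from a wire label, used over Z_q
    return int.from_bytes(hashlib.sha256(label + bytes([i])).digest(), "big") % q

def enc(label, i, msg):
    return (msg + pad(label, i)) % q

def dec(label, i, ct):
    return (ct - pad(label, i)) % q

# --- Garbler (the verifier in ZKGC): knows both labels for each input wire of phi ---
labels = [(secrets.token_bytes(16), secrets.token_bytes(16)) for _ in range(ETA)]
beta = 1 + secrets.randbelow(q - 1)         # beta != 0 so it can be divided out later
alphas = [secrets.randbelow(q) for _ in range(ETA)]
alpha = sum(alphas) % q

# The i-th ciphertext pair lets the evaluator recover alpha_i + y_i * 2^i * beta.
gadget = [(enc(labels[i][0], i, alphas[i]),
           enc(labels[i][1], i, (alphas[i] + (1 << i) * beta) % q))
          for i in range(ETA)]

# --- Evaluator (the prover): holds only the active label for each bit of its value y ---
y = 6                                       # example value whose exponentiation we want
active = [labels[i][(y >> i) & 1] for i in range(ETA)]
shares = [dec(active[i], i, gadget[i][(y >> i) & 1]) for i in range(ETA)]
z_tilde = sum(shares) % q                   # little z~ = alpha + y * beta
Z_tilde = pow(g, z_tilde, p)                # encoded output Z~ = "z~ * G"

# --- Decoding: subtract out alpha and divide out beta in the exponent ---
Z = pow(Z_tilde * pow(g, (-alpha) % q, p) % p, pow(beta, -1, q), p)
assert Z == pow(g, y, p)                    # clear output is "y * G"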
Now, back to committed OT: we relax the problem a bit by tailoring it to our setting. That is, we want the receiver to input its choice bit once, and then we want an unbounded number of instances with this fixed choice bit thereafter. This is fine in our setting because the witness is the PRF seed, and it's going to be the same for all proofs.

Let's try a naive attempt at solving this problem. We can have the sender sample two seeds, seed_0 and seed_1, which it provides to an OT, and the receiver retrieves the seed corresponding to its choice bit b. Online, given some instance x, the sender computes pads P_0 and P_1 by applying the PRF under its respective seeds to x, and then uses P_0 and P_1 as pads to encrypt messages M_0 and M_1. Of these two, the receiver is able to decrypt the message corresponding to its choice bit b, as it has the seed for that pad. During the open phase, the sender simply provides the pads that it had computed earlier, and the receiver is then able to decrypt the ciphertext that it wasn't able to decrypt earlier and retrieve the other message.

To see what goes wrong, let's see what happens when the sender is corrupt. Let's fix a choice bit for the receiver, say zero. Now the sender can simply change its claimed pad P_1 in the open phase to some P*, which tricks the receiver into thinking that the message it couldn't decrypt earlier was instead some M_1*. This error then propagates upwards into the ZKGC protocol.

We can solve this problem using universally composable commitments. Specifically, we use UC commitments that permit the following algorithms: first, a setup that produces a trapdoor, a commitment key, and a verification key; a commitment algorithm that produces a commitment and some decommitment information; of course, a method to verify these; and finally, a straight-line extraction algorithm that extracts the committed message given the trapdoor. Conventionally, this extractor is simply a proof artifact, but in our case we're actually going to execute this proof artifact in our construction.

Once we have this abstraction in place, constructing the committed OT protocol is quite simple. We have a setup functionality that provides two commitment keys to the sender and the corresponding verification keys to the receiver, and additionally, the receiver gets to choose the trapdoor for one of these two commitment keys. In order to send a message pair, the sender commits to the messages M_0 and M_1 using the corresponding commitment keys and sends these commitments to the receiver, who then runs the straight-line extractor to retrieve the message it chose. During the open phase, the sender sends the decommitment information for both messages, and the receiver is able to verify that these messages were indeed the ones committed to earlier.

Security for this construction follows directly from straight-line extraction and equivocability of the commitment scheme, and we instantiate the commitment scheme using ideas from the literature on UC commitments from error-correcting codes; of course, we have to do some extra work to obtain stateless determinism. Committing, verifying, and extraction require only PRF evaluations and some hashing, which is really nice, and this translates to really good efficiency in the online phase of committed OT, which is what we were looking for.
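For concreteness, here is a small sketch of the naive PRF-pad attempt from the start of this segment, together with the sender equivocation it permits (which the UC-commitment-based construction above rules out). This is my own toy modelling, not the paper's protocol: HMAC-SHA256 stands in for the PRF, and XOR of equal-length byte strings is the pad encryption.

# Naive "committed OT" from PRF pads, and the open-phase equivocation by a corrupt sender.
import hashlib, hmac, secrets

def prf(seed, x):                                   # PRF stand-in
    return hmac.new(seed, x, hashlib.sha256).digest()

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

# One-time phase: sender samples two seeds; a single OT gives the receiver seed_b only.
seed0, seed1 = secrets.token_bytes(16), secrets.token_bytes(16)
b = 0                                               # receiver's fixed choice bit
receiver_seed = (seed0, seed1)[b]

# Online phase for one instance x: pads P0, P1 encrypt the two messages.
x = b"instance-1"
M0, M1 = secrets.token_bytes(32), secrets.token_bytes(32)
P0, P1 = prf(seed0, x), prf(seed1, x)
C0, C1 = xor(M0, P0), xor(M1, P1)
assert xor((C0, C1)[b], prf(receiver_seed, x)) == (M0, M1)[b]   # receiver decrypts M_b

# Open phase, honest sender: reveal both pads, so the receiver also learns the other message.
assert xor(C1, P1) == M1

# Open phase, corrupt sender: claim a different pad P* and "open" C1 to any message it likes.
M1_star = secrets.token_bytes(32)
P_star = xor(C1, M1_star)
assert xor(C1, P_star) == M1_star   # the receiver cannot tell P_star from the real pad,
                                    # and this equivocation propagates up into the ZKGC proof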
So concretely, for the parameters that we care about, you can think of the computation and bandwidth cost of this committed OT as about the same as garbling a single AES circuit, which is significantly cheaper than the actual garbling of AES for the PRF, because we need multiple AES invocations to derive an unbiased nonce.

In summary, we show that the ZKGC paradigm is well suited to enabling stateless determinism in threshold Schnorr when we care about computational efficiency and standard assumptions. The dominant cost of previous techniques applied to our setting lay mostly in the logistics rather than in the actual secure computation object, and we developed new techniques to fix this, so that the dominant cost is now just the secure computation object, that is, garbling and evaluating the PRF. Our cost analysis estimates that the computation in our construction should be considerably faster than using trusted hardware. I encourage you to look at the paper for a concrete cost analysis, a number of optimizations and tricks, and of course details on the high-level ideas that I spoke about.

Thanks for your attention. I hope I've motivated you to read our paper, which can be found on my homepage, and please feel free to email me with any questions. Thanks.