Hi, I'm Yash, and this is the advertisement for my talk on improved straight-line extraction in the random oracle model with applications to signature aggregation. This is joint work with Abhi Shelat.

Recall that a sigma protocol consists of a commitment, a challenge, and a response exchanged between a prover and a verifier. The Fiat-Shamir transformation gives a simple way to compile a sigma protocol into a non-interactive proof with a suitably chosen hash function: intuitively, the prover simply hashes the statement along with its first message to replace the verifier's challenge. The security of the Fiat-Shamir transformation is proven in the random oracle model with a so-called forking strategy, which involves running the prover twice and choosing a random point among its queries to the random oracle at which to fork its execution, in some sense.

The Fiat-Shamir transform has a number of advantages. It is simple to describe and implement, and it is also very efficient: it costs roughly the same as the underlying sigma protocol to prove and verify. On the other hand, the forking strategy that I just described doesn't compose, and it's unclear how to prove concurrent security when using the Fiat-Shamir transform. There is also a quadratic security loss that comes from having to run the prover twice.

Straight-line extraction, on the other hand, which was formalized by Pass in the random oracle model, involves running the prover just once and having the extractor simply read its queries to the random oracle to deduce the witness for the statement. Pass showed that this sort of extraction strategy is amenable to concurrent composition, and in the same paper he gave a simple cut-and-choose construction to achieve straight-line extraction. Fischlin, in a subsequent work, gave a straight-line extractable compiler that avoided the cut-and-choose overhead of Pass's construction through a clever proof-of-work-type idea.
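To make the Fiat-Shamir idea concrete, here is a minimal sketch of the transform applied to Schnorr's sigma protocol. The toy group parameters, function names, and encoding below are my own illustrative assumptions and are not from the talk; real deployments use cryptographically large groups.

```python
import hashlib
import secrets

# Toy group: the subgroup of order q = 11 in Z_23^* generated by g = 4.
# Illustration only -- real deployments use large (elliptic-curve) groups.
p, q, g = 23, 11, 4

def hash_challenge(h, a):
    # The verifier's challenge is replaced by a hash of the statement h
    # and the prover's first message a -- the core Fiat-Shamir step.
    digest = hashlib.sha256(f"{h}|{a}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def fs_prove(x):
    """Non-interactive Schnorr proof of knowledge of x, where h = g^x."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)     # commitment randomness
    a = pow(g, r, p)             # first message (commitment)
    e = hash_challenge(h, a)     # hashed challenge
    z = (r + e * x) % q          # response
    return h, (a, z)

def fs_verify(h, proof):
    a, z = proof
    e = hash_challenge(h, a)
    # Accept iff g^z == a * h^e, exactly as the interactive verifier would.
    return pow(g, z, p) == (a * pow(h, e, p)) % p
```

The proof is a single message (a, z); the verifier recomputes the challenge itself, which is what makes the protocol non-interactive.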
Intuitively, the prover sends the verifier transcripts of the form (a, e, z) such that each transcript hashes to the all-zero string, again for a suitably chosen hash function.

In this work, we explore two dimensions of Fischlin's compiler: its computation cost and its applicability. In particular, the computation cost tends to be the bottleneck in a number of applications, and so we ask whether we can improve on it. In terms of applicability, Fischlin only proved that his compiler applies to a restricted class of sigma protocols that satisfy a notion of quasi-unique responses. This does not, for example, include the logical-OR composition of sigma protocols, or a proof of knowledge of the opening of a Pedersen commitment, and so forth. There is a folklore belief that Fischlin's transform works anyway, and that it's just a matter of writing down the proof; we explore this in our work.

To begin with, we give a lower bound that explains the lack of progress since Fischlin's original work, showing that Fischlin's protocol is optimal up to a small constant in terms of computation. On the other hand, we do show that application-specific optimization is possible: in particular, we show up to a factor-of-200 improvement for the application of EdDSA signature aggregation. In terms of applicability, we show that the folklore is actually wrong: we give a new attack on witness indistinguishability that breaks Fischlin's transform in certain contexts, and we show that this attack can be fixed by a simple randomization mechanism.

So I hope to see you at the talk. You can also find our work online at this link. Thanks.
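The proof-of-work flavor of Fischlin's compiler can be sketched as follows, again on top of a toy Schnorr protocol. This is a simplified single-repetition sketch under my own assumed parameters: Fischlin's actual compiler runs several parallel repetitions with a bounded challenge space, which is essential for soundness and for the straight-line extractor to find two related transcripts among the oracle queries.

```python
import hashlib
import secrets

# Toy group: the subgroup of order q = 11 in Z_23^* generated by g = 4.
p, q, g = 23, 11, 4
B = 6  # difficulty: transcript hash must be 0 mod 2^B ("ends in B zero bits");
       # tiny for illustration -- Fischlin's real parameters are larger.

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def fischlin_prove(x):
    """Grind over challenges until a full accepting transcript hashes to zero."""
    h = pow(g, x, p)
    while True:
        r = secrets.randbelow(q)
        a = pow(g, r, p)                 # fresh commitment
        for e in range(2 ** B * 4):      # bounded search over challenges
            z = (r + e * x) % q          # response for this challenge
            if H(h, a, e, z) % (2 ** B) == 0:
                return h, (a, e, z)
        # search failed for this a; retry with fresh randomness

def fischlin_verify(h, proof):
    a, e, z = proof
    valid_transcript = pow(g, z, p) == (a * pow(h, e, p)) % p
    work_done = H(h, a, e, z) % (2 ** B) == 0
    return valid_transcript and work_done
```

The extractor never rewinds: it watches the prover's hash queries, and since the honest prover must evaluate the hash on many accepting transcripts sharing a commitment, two transcripts with the same a but different challenges appear among the queries, from which the witness can be computed.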