Hello, everyone. Welcome to the longer version of the talk for our paper published in TCHES, "Side-Channel Protections for Picnic Signatures". This is joint work with Diego Aranha from Aarhus University, Sebastian Berndt, Thomas Eisenbarth, and Luca Wilke from the University of Lübeck, and Greg Zaverucha from Microsoft Research. I am Akira Takahashi from Aarhus University, and I will give an overview of our main results. In the second part, Okan Seker from the University of Lübeck will give more details about our implementation, as well as the concrete side-channel leakage analysis. This research is motivated by the fact that the side-channel resistance of cryptographic schemes is becoming more and more relevant as they are deployed in real-life conditions. It is also one of the important evaluation criteria of the NIST post-quantum cryptography standardization process. However, unlike for other candidates, there has been little study of the side-channel resilience of the Picnic signature scheme, or, more generally, of signatures based on MPC-in-the-head zero-knowledge proofs. So we want to push forward this line of research. Let's briefly recap the current state of the art. Picnic is a Fiat-Shamir-type signature derived from the MPC-in-the-head zero-knowledge proof of Ishai et al. It has a couple of nice features. For example, its security doesn't rely on any number-theoretic assumptions: essentially, the only assumptions we need are the security of a block cipher and of a hash function modeled as a random oracle. It also supports various parameter sets and different signing methods. However, the first version of the Picnic implementation was shown to be vulnerable to differential power analysis. Although a masking countermeasure was proposed last year, it left several practical challenges. For example, it had to change the format of the output signature, which breaks interoperability with the existing verification algorithm. Moreover, it increases the signature size depending on the masking order.
These features seem undesirable in practice; in fact, other masked post-quantum signatures like Dilithium or qTESLA didn't have such issues. On the other hand, there is also an updated version of Picnic called Picnic3, which follows the MPC-in-the-head paradigm extended with a so-called pre-processing phase and leads to more compact signatures. Unfortunately, Picnic3 has never been evaluated from a side-channel perspective. So in this research, we address essentially two questions. First, could a side-channel attacker exploit MPC-in-the-head with pre-processing to attack Picnic3? Second, can we maintain interoperability and signature size while applying masking countermeasures? We answer these questions with the following results. First, we identify two types of side-channel vulnerabilities in Picnic3. While the first one is a direct adaptation of the previous attack on Picnic1, the second attack is new and exploits specific properties of the MPC-in-the-head-with-pre-processing paradigm. As a countermeasure, we suggest a generic approach to mask zero-knowledge proofs based on MPC-in-the-head with pre-processing. In the paper, we prove that our masked signing operations satisfy the standard masking security notion called t-probing security, and we further support our claim using the formal verification tool maskVerif. To achieve provable security, we have to sacrifice some performance due to masking overhead, but we also suggest some heuristic ways to improve performance by partially trading formal t-probing security. The security of these heuristics can still be validated with experimental leakage analysis. We then apply our generic masking countermeasures to Picnic3 to obtain a first-order masked implementation. As a side contribution, we also publicly release the masked SHA-3 implementation used as a building block. We finally conclude with a practical electromagnetic side-channel leakage analysis.
Okan will give more insights on the latter two points. So I'm going to show how side-channel attackers can steal the secret signing key of Picnic3. Let's take a look at how a zero-knowledge proof using MPC-in-the-head works. Here the prover holds a secret-shared key, and the verifier has some circuit description F as well as its output x when F is evaluated on the secret key. Using the secret-shared key, the prover executes a multi-party computation protocol in her head among some imaginary parties. Each party outputs a view consisting of its secret key share, its randomness, and all the incoming messages in the protocol. She commits to every view and, on receiving a challenge index, she reveals a subset of the views. The verifier then essentially checks that the MPC was executed correctly. That's an overview of the MPC-in-the-head paradigm. The first attack is relatively simple. Of course, the unopened party's view is very sensitive because it contains the remaining share of the secret key. So if a side-channel adversary obtains some leakage information about that share, they can immediately recover the secret key. This type of attack was already discovered against Picnic1, and we can show that essentially the same attack also applies to Picnic3 in a direct manner. So what about MPC-in-the-head with pre-processing? In this extended paradigm, the MPC protocol is divided into two phases. The first phase is offline, meaning that it can be executed independently of any input values; the parties use random seeds to pre-process some state information. Then, in the online phase, the parties can efficiently perform the actual computation by making use of the pre-processed states. Once the MPC protocol is done, the prover commits to both the online and offline phases. Now the challenge has two dimensions. The first part, b, indicates whether the offline or the online phase is to be revealed.
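The first attack can be illustrated with a tiny sketch, hypothetical Python not taken from our implementation, assuming plain XOR secret sharing of a 128-bit key among 16 parties (the illustrative party count here is an assumption): leaking the single unopened share is as good as leaking the key itself.

```python
import secrets
from functools import reduce
from operator import xor

N = 16  # illustrative number of MPC parties
sk = secrets.randbits(128)

# The prover splits sk into N XOR shares, one per imaginary party.
shares = [secrets.randbits(128) for _ in range(N - 1)]
shares.append(reduce(xor, shares) ^ sk)

# A Picnic-style proof opens all views except one, say index 0,
# so the adversary already knows shares[1:] from the signature.
opened = shares[1:]

# If side-channel leakage reveals the one unopened share, XORing it
# with the public shares recovers the full signing key.
recovered = reduce(xor, opened) ^ shares[0]
assert recovered == sk
```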
In the former case, the prover simply opens all the random seeds used for the offline computation, which contain no sensitive information. If the online phase is to be revealed, then the prover essentially opens all-but-one views as usual, and the verifier checks that either the offline or the online phase was executed correctly. So before describing an attack, let's see why and how pre-processing is used. In the MPC setting, multiplication of two secret-shared inputs is often costly, so we want to save online computational cost by pushing some work to the offline phase. Here's a standard trick. In the offline phase, the parties generate a lot of random sharings, so-called Beaver triples. These do not depend on the inputs to the circuit and, in particular, they can be easily generated in the MPC-in-the-head setting, since the prover essentially simulates all parties however she likes. In the online phase, one can safely reconstruct the shares after adding the pre-processed randomness lambda to the secret inputs x and y. Then, by rewriting the multiplication equation using the random triple, one can easily compute the result without any nonlinear operations on secret shares. However, you may notice that this offline/online paradigm is actually another attack surface that can be exploited by side-channel adversaries. In this attack, we assume that the offline phase is revealed. In that case, all pre-processing information from the offline phase is made public. So if the attacker probes the right information from the online phase, they have enough information to compute the secret, since security of the MPC protocol only holds as long as at least one of the pre-processing shares remains private. Importantly, this attack works independently of the number of parties in the MPC, so you cannot mitigate it by simply tweaking the number of unopened parties, which you might do with the previous countermeasure. This motivated us to design a different approach.
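The Beaver-triple trick can be sketched as follows, again as hypothetical Python for a single AND gate over XOR-shared bits (the function names are mine, not from the Picnic code): the offline phase outputs shares of random a, b and of c = a AND b, and the online phase only opens the masked values d = x XOR a and e = y XOR b, which reveal nothing about x and y on their own.

```python
import secrets
from functools import reduce
from operator import xor

def share(x, n):
    """XOR-share the bit x among n parties."""
    sh = [secrets.randbits(1) for _ in range(n - 1)]
    sh.append(reduce(xor, sh, x))
    return sh

def beaver_triple(n):
    """Offline phase: shares of random bits (a, b) and of c = a AND b."""
    a, b = secrets.randbits(1), secrets.randbits(1)
    return share(a, n), share(b, n), share(a & b, n)

def masked_and(x_sh, y_sh, triple):
    """Online phase: shares of x AND y using one pre-processed triple."""
    a_sh, b_sh, c_sh = triple
    # The masked values d = x^a and e = y^b are safe to open publicly.
    d = reduce(xor, x_sh) ^ reduce(xor, a_sh)
    e = reduce(xor, y_sh) ^ reduce(xor, b_sh)
    # x AND y = (d AND e) ^ (d AND b) ^ (e AND a) ^ c: linear in the shares,
    # so each party computes its output share locally.
    z_sh = [(d & b) ^ (e & a) ^ c for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] ^= d & e  # the public constant term goes to one party
    return z_sh
```

For instance, `reduce(xor, masked_and(share(1, 3), share(1, 3), beaver_triple(3)))` reconstructs to 1, and any combination of input bits reconstructs to their AND.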
So, on a high level, in our masking countermeasure the prover essentially shares the shares. Concretely, each party's share is split again into several shares, and every party internally does its computation in a masked way. Accordingly, all the views are maintained in secret-shared form until the prover learns the challenge. Once she obtains the challenge, she can either keep them in secret-shared form, when the offline phase is revealed, or reconstruct only the views of the opened parties, when the online phase is revealed. In this way, even if the adversary obtains information about some share, there is always at least one share of the view that remains completely hidden. With this approach, we don't have to change the number of parties, so it neither breaks interoperability with existing verification algorithms nor introduces any overhead in signature size due to masking. Moreover, we can prove that it actually meets the standard masking security notion. However, one caveat is that we also have to mask the seed expansion and the commitment computations, because the prover doesn't know which of the offline or online phases will be made public before receiving the challenge. In particular, Picnic3 employs the SHA-3 hash function to commit to the states and all the online messages. As we will see in the benchmarks, masking all these hash invocations is expensive, and we would like to avoid that in practice. Accordingly, we also provide some heuristic options to leave some non-sensitive hash computations unmasked. The rationale behind this choice is as follows. First, since the signing operation is randomized, some hash inputs that are unique per signature are not sensitive, regarding SHA-3 as a random oracle and assuming that the attacker only gets to probe t input bits of SHA-3. Second, commitment outputs are part of the signature, so they are definitely not sensitive. Under such heuristic assumptions, we can selectively mask only about half of the SHA-3 computations.
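The "share the shares" idea above can be sketched in a few lines of hypothetical Python (first-order Boolean masking of one party's 128-bit share; the helper names are mine): each MPC share is itself kept as two masking shares, and it is reconstructed only if that party's view is actually opened after the challenge arrives.

```python
import secrets
from functools import reduce
from operator import xor

MASK_ORDER = 1  # first-order masking, as in our implementation

def mask(value, bits=128):
    """Split one party's share again into MASK_ORDER + 1 masking shares."""
    rnd = [secrets.randbits(bits) for _ in range(MASK_ORDER)]
    return rnd + [reduce(xor, rnd, value)]

def unmask(masked):
    """Reconstruct a share; done only for views that get opened."""
    return reduce(xor, masked)

party_share = secrets.randbits(128)
m = mask(party_share)
# Probing any single masking share reveals nothing about party_share:
# each element of m is uniformly random on its own.
assert unmask(m) == party_share
```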
Although we lose the formal t-probing security guarantee, we were able to experimentally confirm that no leakage occurs in this heuristic version of the implementation. So that's it from me. Now I'm passing the baton to Okan, who is going to talk about further practical aspects of our results, including benchmarks and leakage analysis of the Picnic3 implementations. Thank you, Akira, and I would like to welcome everyone again. Let me start with the performance results of our implementation. In this table, you can see the benchmark results, and the interesting column is actually the overhead. The highlighted row corresponds to the unprotected Picnic implementation, and as you can see, hashing accounts for 70% of the operations. So by adapting the masking technique for SHA-3, we managed to reduce the overhead factor from 5.4 to 1.8. Here we can use fully provably secure masking and mask every hash invocation or, as Akira described, selectively mask only the sensitive hash computations. I would like to continue with our practical setup. In this picture, you can see an overview of our setup. As our capturing device, we use a Tektronix MSO6 oscilloscope with two different sampling rates. As our target device, we have used the STM32 Discovery board suggested by the pqm4 project, with an ARM Cortex-M4 clocked at 128 MHz. As our test environment, the leakage analysis code works with the pqm4 GitHub repo, so the analysis can be reproduced easily. As our measurement spot, we have chosen a decoupling capacitor, as seen in the picture, and placed our electromagnetic probe close to this point, because that point represents the power consumption best. As our analysis tool, we have used TVLA. This is a very simple statistical tool used in the literature for leakage analysis: it is a pass/fail test to determine whether an implementation has leakage or not, where leakage means, of course, data-dependent behavior of the device.
Of course, the analysis should be implemented carefully. Even small changes within the implementation, such as the usage of registers, the response time of your random number generator, or a desynchronization between the target and the control PC, can be detected by the test. Even if those are not exactly exploitable or real leakage, such leakages can show up. There are two different versions of this test. The first one is fixed-vs-random, which is meant to detect all first-order leakages of a device. The idea is to process either fixed or random data; comparing the traces belonging to the fixed data with the traces belonging to random data then exposes data dependency throughout the implementation. The second one is random-vs-random. In this case, we always process random data; however, the classification depends on a single bit inside the implementation. The second method is therefore used to observe specific targets in an implementation. Our goal in this leakage analysis is to first show that the attacks described in the earlier part of our presentation are indeed possible, and of course to show that our masked Picnic3 implementation is leakage-free. The first attack is the same as that of Gellersen et al. In this attack, we use the values from the pre-computation phase, and the highlighted values correspond to the attack on the opened offline phase, where we probe one unopened party. We see that after around 1,000 traces the leakage becomes observable, as you can see from the graph on the right-hand side. In our second analysis, we implement the random-vs-random test to actually verify the leakage inside the unprotected Picnic3 implementation. As you've seen in the previous part of our presentation, this second attack is novel and cannot be eliminated by previous approaches such as SNI-in-the-head. Here you can see that the highlighted values are opened, and we are measuring a single online simulation.
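The fixed-vs-random test described above boils down to Welch's t-test computed at every sample point of the two trace sets, with the customary |t| > 4.5 pass/fail threshold. A minimal sketch, assuming NumPy and synthetic traces rather than our actual measurement data:

```python
import numpy as np

THRESHOLD = 4.5  # customary TVLA pass/fail bound on |t|

def tvla_t(fixed, random):
    """Welch's t-statistic at each sample point of two trace sets."""
    f = np.asarray(fixed, dtype=float)
    r = np.asarray(random, dtype=float)
    num = f.mean(axis=0) - r.mean(axis=0)
    den = np.sqrt(f.var(axis=0, ddof=1) / len(f) +
                  r.var(axis=0, ddof=1) / len(r))
    return num / den

# Simulated traces: two identically distributed groups should pass...
rng = np.random.default_rng(1)
fixed = rng.normal(size=(2000, 100))
rand_ = rng.normal(size=(2000, 100))
t_pass = tvla_t(fixed, rand_)

# ...while a data-dependent shift at one sample point gets flagged.
fixed[:, 42] += 0.5
t_fail = tvla_t(fixed, rand_)
leaky = np.abs(t_fail) > THRESHOLD
```

Any sample point where |t| exceeds the threshold is reported as potential first-order leakage; a real leak also grows with the number of traces, which is the increasing pattern we refer to below.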
Using the highlighted values, we manage to observe the leakage and verify this attack, the attack on the unopened online phase. Moreover, we can see that with fewer than 3,000 traces the leakage becomes clear. Next, we proceed with the leakage analysis of SHA-3. We use the fixed-vs-random setting and hash either a random or a fixed value. Why did we focus on hashing? Because, as you saw in the benchmarks, 70% of the Picnic implementation is hashing, and therefore we have to protect the SHA-3 part of our implementation. First of all, as a standard check, we disable the masking by forcing the mask values to zero. Here you can see leakages everywhere in the implementation, and they become clearly recognizable with even 2,000 traces. When we enable the masking again, we see that the leakages are gone: even with 1,000 traces there is no leakage. Finally, we perform the leakage analysis of the whole masked Picnic3 implementation. We measured from the start of the signing operation until the end of the first MPC instance, of course including an online and an offline phase. We worked with a fixed-vs-random setup: we used a fixed or random key, a fixed message, and a randomized signature in each case. Thus, the only thing that changes between the two cases is the secret key itself. We have observed that the test results stay below the threshold value for all 8 million sample points. Moreover, we have observed that the maximum t-values show a stable pattern, as you can see on the right-hand side. Remark that, as you remember from the previous leakage analyses, a real leakage has a clear increasing pattern even with a small number of traces, while here we see a stable pattern even after 1 million traces.
So, as a result, what we have shown in this work is that such attacks on the KKW protocol are a real threat, and the opened values actually act as free probes for the adversary. We have provided a masked MPC-in-the-head paradigm that works with pre-processing. We applied our idea to Picnic3 and showed that, with an overhead factor of 1.8 to 5.4, we can achieve first-order protection. We have also provided a masked SHA-3 implementation that is optimized for the Cortex-M4, with different options. So thank you for your attention. If you have any questions or comments, we will gladly answer them. You can find more details in our ePrint report, or, if you would like to take a look at our implementation, you can check our GitHub page. As we said before, it is fully compatible with the pqm4 project, so that you can reproduce the benchmarks or the analysis quite easily. Thank you.