Hello everyone, I am Shravani Patil, and today I will be presenting our work, Attaining GOD Beyond Honest Majority with Friends and Foes.

Let me begin by defining secure multi-party computation, or MPC. MPC allows n mutually distrusting parties, say P1 to Pn, each holding a private input, to compute a joint function of their inputs. This distrust among the parties is modelled as a centralized adversary which can corrupt t out of the n parties. Any MPC protocol is required to satisfy the following properties: correctness of the output computation, and privacy, that is, nothing beyond the function output should be revealed to the adversary.

Although the classical definition of security accounts for what is revealed to the adversary, it does not account for information leakage towards the honest parties. That is, an adversary can leak information to the honest parties by sending across its entire view, and this would not be considered a privacy breach. In fact, this loophole in the security definition has been leveraged by several protocols in the literature, especially those for small-population MPC, to achieve the strongest security guarantee of guaranteed output delivery, or GOD, which ensures that the honest parties receive their output irrespective of the adversary's strategy. The way these protocols achieve this is by requiring all the parties to send their private inputs to a designated party identified as a trusted third party, or TTP. Although this helps in achieving GOD, in practical scenarios where MPC is deployed, the parties may actually be servers owned by different companies, and revealing information to other parties is not a viable solution.

This is where the notion of friends-and-foes (FaF) security, defined by Alon et al., comes into the picture. Here, instead of considering every party to be purely honest, every party is modelled as being at least semi-honest. Towards this, in addition to a malicious adversary which can corrupt t parties, they consider a second adversary which can semi-honestly corrupt up to h* of the remaining parties. To capture view leakage, the malicious adversary is allowed to send its entire view to the semi-honest parties, and security is required to hold in the face of both these adversaries. In fact, Alon et al. also show that fairness or GOD is possible in this setting if and only if 2t + h* < n.

Given that this is a more practically relevant notion of security, we focus on achieving friends-and-foes security in our work. Our contributions are twofold. On the theoretical side, we show the necessity of oblivious transfer (OT) for constructing a generic (t, h*)-FaF-secure protocol when n ≤ 2t + 2h*. Note that this subsumes the optimal corruption threshold of 2t + h*. Given this, on the practical side, we focus on the small-population MPC setting and construct our four-party protocol QuadSquad, which is secure against one malicious and one semi-honest corruption in the friends-and-foes model. We have two variants of our protocol, fair as well as GOD. Note that the corruption threshold we consider is optimal as per the bounds given in Alon et al.'s work. Further, to ensure efficiency, we operate in the preprocessing paradigm and construct the protocol over rings to leverage the CPU architecture.
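To make the last point concrete, here is a minimal sketch of additive secret sharing over the ring Z_{2^64}. This is an illustration only, not QuadSquad's actual sharing scheme (which is more involved and relies on preprocessing); the helper names share and reconstruct are our own. The point it demonstrates is why rings like Z_{2^64} are attractive: unsigned 64-bit CPU arithmetic performs reduction modulo 2^64 for free, so no explicit modular reduction is ever needed.

```cpp
// Minimal illustration of additive secret sharing over the ring Z_{2^64}.
// NOTE: this is NOT QuadSquad's sharing scheme; it only shows why working
// over Z_{2^64} is cheap on commodity hardware: unsigned 64-bit arithmetic
// wraps around modulo 2^64 automatically.
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Split a secret into n additive shares whose sum is the secret mod 2^64.
std::vector<uint64_t> share(uint64_t secret, size_t n, std::mt19937_64& rng) {
    std::vector<uint64_t> shares(n);
    uint64_t sum = 0;
    for (size_t i = 0; i + 1 < n; ++i) {
        shares[i] = rng();          // uniformly random ring element
        sum += shares[i];           // wraps mod 2^64 automatically
    }
    shares[n - 1] = secret - sum;   // also mod 2^64 via wraparound
    return shares;
}

// Reconstruct by summing all shares (again mod 2^64 via wraparound).
uint64_t reconstruct(const std::vector<uint64_t>& shares) {
    uint64_t secret = 0;
    for (uint64_t s : shares) secret += s;
    return secret;
}

int main() {
    std::mt19937_64 rng(42);  // fixed seed, for a reproducible demo only
    uint64_t x = 1234567, y = 7654321;

    // Each of 4 parties holds one share of x and one share of y.
    auto xs = share(x, 4, rng);
    auto ys = share(y, 4, rng);

    // Addition of shared values is local: each party adds its own shares,
    // with no communication among the parties.
    std::vector<uint64_t> zs(4);
    for (size_t i = 0; i < 4; ++i) zs[i] = xs[i] + ys[i];

    std::cout << reconstruct(zs) << "\n";  // prints 8888888 = x + y
    return 0;
}
```

Linear operations come for free in such schemes, while multiplications are what require the correlated randomness produced in the preprocessing phase.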
Finally, we show an application of our protocol in the domain of privacy-preserving machine learning to demonstrate its practical relevance. We compare our protocol to the state-of-the-art four-party protocols in the honest-majority as well as the dishonest-majority setting. We note that both our fair and GOD protocols have an online cost comparable to that of the honest-majority protocols, while tackling a stronger adversary with an additional semi-honest corruption. Although the preprocessing cost of our protocol is higher in comparison, this is justified by the necessity of oblivious transfer, which we have already proven. On the other hand, both our protocols outperform the dishonest-majority protocol MASCOT in the preprocessing as well as the online phase, while elevating the security guarantee from abort to GOD.

As mentioned before, we have benchmarked our protocol on the neural-network inference task over the MNIST dataset. Here as well, we note that although the total communication cost of our protocol is higher than that of the honest-majority protocols, this is primarily due to the preprocessing cost. In fact, the online communication is still comparable to that of the honest-majority protocols, which makes our protocol a practically viable option, especially given the stronger adversarial model it considers. The code for our protocol is publicly available at the following link. Thank you.