Dropout regularization is an effective technique for improving the robustness of deep learning models against adversarial attacks. By randomly deactivating neurons during training, dropout increases functional smearing among neurons, spreading representations redundantly across the network, which further enhances resilience to adversarial perturbations. The degree of functional smearing depends on the dropout rate: higher rates produce more smearing and, in turn, stronger protection against adversarial attacks.

This article was authored by Nita Sardar, Sundas Khan, Arun Hinsi, and others.
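To make the mechanism concrete, here is a minimal sketch of inverted dropout in plain Python (no framework assumed): during training each activation is zeroed with probability `rate`, and survivors are rescaled by `1/(1 - rate)` so the layer's expected output is unchanged; at inference the layer is a no-op. The function name and signature are illustrative, not from the article.

```python
import random

def inverted_dropout(activations, rate, training=True, rng=None):
    """Zero each activation with probability `rate`; scale survivors
    by 1/(1 - rate) so the expected value is preserved (inverted dropout)."""
    if not 0.0 <= rate < 1.0:
        raise ValueError("rate must be in [0, 1)")
    if not training or rate == 0.0:
        # Inference mode: dropout is disabled, activations pass through.
        return list(activations)
    rng = rng or random.Random()
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [0.5, -1.2, 3.0, 0.8]
# Training pass: roughly half the units are zeroed, the rest doubled.
print(inverted_dropout(acts, rate=0.5, rng=random.Random(0)))
# Inference pass: unchanged.
print(inverted_dropout(acts, rate=0.5, training=False))
```

Forcing the network to tolerate randomly missing units is what encourages the redundant, smeared representations the article describes.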