We propose a class of multiplicative algorithms for nonnegative matrix factorization (NMF) that are robust with respect to noise and outliers. The algorithms are derived from a new family of generalized divergences, the alpha-beta (AB) divergences, parameterized by two tuning parameters, alpha and beta. By adjusting these parameters, we show that a wide range of standard and new divergences can be obtained. The corresponding learning algorithms for NMF are shown to integrate and generalize many existing ones, including the Lee-Seung multiplicative updates, ISRA (Image Space Reconstruction Algorithm), EMML (Expectation Maximization Maximum Likelihood), Alpha-NMF, and Beta-NMF. Owing to the additional degrees of freedom in tuning the parameters, the proposed family of AB multiplicative NMF algorithms is shown to improve robustness with respect to noise and outliers. The analysis also illuminates the links between the AB divergence and other divergences, especially the gamma and Itakura-Saito divergences. This article was authored by Andrzej Cichocki, Sergio Cruces, and Shun-ichi Amari.
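As a minimal illustration of the two-parameter family described above, the sketch below implements one common parameterization of the AB divergence between two nonnegative arrays, valid when alpha, beta, and alpha+beta are all nonzero. The function name and the small `eps` smoothing term are illustrative choices, not part of the original text; the only claim checked here is a known special case, namely that alpha = beta = 1 recovers half the squared Euclidean distance.

```python
import numpy as np

def ab_divergence(P, Q, alpha, beta, eps=1e-12):
    """Sketch of the AB divergence D_AB^(alpha,beta)(P || Q).

    Assumes alpha != 0, beta != 0, and alpha + beta != 0 (other cases are
    defined by continuity in the AB family and are omitted here).
    `eps` guards against 0**negative-power; it is an implementation choice.
    """
    P = np.asarray(P, dtype=float) + eps
    Q = np.asarray(Q, dtype=float) + eps
    a, b = alpha, beta
    s = a + b
    # Elementwise divergence, summed over all entries.
    return -np.sum(P**a * Q**b - (a / s) * P**s - (b / s) * Q**s) / (a * b)
```

For example, `ab_divergence(P, Q, 1, 1)` equals `0.5 * np.sum((P - Q)**2)`, the squared Euclidean cost underlying the classical Lee-Seung multiplicative updates, while other (alpha, beta) choices trade this off against robustness to noise and outliers as discussed in the abstract.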