This paper extends the critiques of algorithmic bias by Joy Buolamwini and Ruha Benjamin to the African continent, specifically assessing the prevalence of such biases in the DALL-E 2 and Starry AI text-to-image generators. The study found that DALL-E 2 underperformed when generating images from the prompt "an African family" compared with the unmarked prompt "a family", while Starry AI performed better overall but depicted the culture inaccurately. The paper highlights the need for greater inclusion to address these cultural inaccuracies and advocates for algorithmic equality and fairness. This article was authored by Blessing Mbalaka.