Abstract ChatGPT is a conversational agent designed, among other things, to provide moral advice to users. Despite its potential benefits, our research found that ChatGPT's moral advice is inconsistent and can sway users' judgment, leading to unintended consequences. To address this issue, we suggest both improving the design of ChatGPT and strengthening users' digital literacy. Additionally, we argue that transparency alone is insufficient to ensure the responsible use of AI. This article was authored by Sebastian Krügel, Andreas Ostermeier, and Matthias Ull.