This study used ChatGPT (Chat Generative Pre-trained Transformer), powered by GPT-3 (Generative Pre-trained Transformer 3), to generate a fraudulent medical article related to neurosurgery. The AI language model, trained on a massive corpus of text, can create highly convincing fraudulent papers that resemble genuine scientific papers in word usage, sentence structure, and overall composition. However, expert readers may identify semantic inaccuracies and errors upon closer inspection, highlighting the need for increased vigilance and better detection methods to combat the potential misuse of AI in scientific research. This article was authored by Martin Májovský, Martin Černý, Matěj Kasal, and others.