This study compared the performance of GPT-3.5 and GPT-4 on the Japanese Medical Licensing Examination (JMLE) to evaluate their reliability for clinical reasoning and medical knowledge in non-English languages. GPT-4 outperformed GPT-3.5, particularly on general, clinical, and clinical-sentence questions, and met the passing criteria for the JMLE, indicating its potential as a valuable tool for medical education and clinical support in non-English-speaking regions. This article was authored by Soshi Takagi, Takashi Watari, Ayano Erabi, and others.