 This study evaluated ChatGPT and GPT-4 on USMLE questions involving communication skills, ethics, empathy, and professionalism, finding that GPT-4 outperformed ChatGPT with a correct-answer rate of 90% versus ChatGPT's 62.5%. GPT-4 also showed greater confidence and did not revise any of its responses, whereas ChatGPT modified its original answers 82.5% of the time. GPT-4's performance exceeded that of AMBOSS's past users, indicating AI's potential to meet the complex interpersonal, ethical, and professional demands intrinsic to the practice of medicine. This article was authored by Dana Brin, Vera Sorin, Akhil Vaid, and others.