The performance of the currently available large language models is impressive in many ways. They not only answer questions of all kinds in superb style, but they can also take over tasks such as generating programming code, performing mathematical calculations or analysing language data. They can even transcribe texts phonemically, a skill that is part of the competence portfolio of well-trained human linguists. But are the results qualitatively OK? Let's explore this using an English example.

This is ChatGPT's solution. What about the quality of this answer? Well, first of all, even though the proposed solution contains some mistakes, honestly, in everyday university life I have rarely received such good solutions. Nevertheless, there are mistakes. They concern dialectal aspects, the use of phonetic symbols and the application of connected-speech principles.

In present-day English, the most important phonological dialects are the British variant, Received Pronunciation (RP), and North American English. The most obvious difference between them is the absence of the post-vocalic R in RP. Hence words like hour, for, lettered, word, youngster, dear and her should not contain a final R in RP. The ChatGPT solution is pretty inconsistent here and exhibits three mistakes.

The choice of symbols depends on the transcription system chosen. For example, in a pure IPA system, the epsilon is used in words like bless. In the LPD, that is the Longman Pronunciation Dictionary notation, which is commonly used in education, the simple letter E is used instead. In our text this concerns seven cases.

The explanation of connected-speech effects, that is assimilation, elision, liaison and weakening, requires a fair amount of background knowledge. Whereas the first three are optional, weakening is an obligatory feature of present-day English. In the text it applies to twelve monosyllabic function words, which, when unstressed, are pronounced weakly.
However, this does not apply if they are at the end of a sentence, as in her. The result exhibits three mistakes.

In summary, despite a total of eleven mistakes, ChatGPT's proposed solution is impressive. But on closer inspection, it is still buggy. We may ignore simple symbol errors and correct them via copy and paste. For the remaining errors, however, only the appropriate background knowledge can help. And we have to acquire that, not by reading texts, but by training and intensive practice.

By the way, there are also other transcription programs, for example this one. It is more accurate, but still not error-free. So my advice remains the same: practice, practice, practice. How, you might ask? Well, I'll show you in another video.
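Two of the corrections discussed above, dropping the post-vocalic R for RP and replacing the pure-IPA epsilon with the LPD letter e, are mechanical enough to sketch in code. The following Python fragment is an illustrative simplification, not part of the video: the vowel inventory and the example transcriptions are my own rough assumptions for demonstration purposes.

```python
# Rough vowel inventory for the post-vocalic check (simplified assumption;
# the length mark is included so that e.g. /ɔː/ counts as a vowel context).
VOWELS = set("ɑæʌɔəɜɛeiɪouʊaː")

def drop_postvocalic_r(ipa: str) -> str:
    """RP is non-rhotic: an /r/ after a vowel that is not itself
    followed by a vowel (word-final or pre-consonantal) is dropped."""
    out = []
    for i, ch in enumerate(ipa):
        if ch == "r":
            prev_is_vowel = i > 0 and ipa[i - 1] in VOWELS
            next_is_vowel = i + 1 < len(ipa) and ipa[i + 1] in VOWELS
            if prev_is_vowel and not next_is_vowel:
                continue  # non-rhotic position: omit the /r/
        out.append(ch)
    return "".join(out)

def ipa_to_lpd(ipa: str) -> str:
    """LPD notation writes plain e where a pure IPA system uses ɛ."""
    return ipa.replace("ɛ", "e")

print(drop_postvocalic_r("letərd"))  # lettered: post-vocalic r dropped
print(ipa_to_lpd("blɛs"))            # bless: ɛ replaced by e
```

A word-initial or intervocalic /r/, as in red, is of course kept; only the post-vocalic position is affected by the rule.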
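The weakening rule can likewise be sketched. The mini-lexicon below is hypothetical: the strong and weak transcriptions are simplified RP forms I supply only to illustrate the logic that weakening is obligatory for unstressed function words except in sentence-final position.

```python
# Hypothetical mini-lexicon: (strong form, weak form) in simplified RP.
WEAK_FORMS = {
    "for": ("fɔː", "fə"),
    "her": ("hɜː", "hə"),
    "and": ("ænd", "ənd"),
    "to":  ("tuː", "tə"),
}

def transcribe_function_word(word: str,
                             stressed: bool = False,
                             sentence_final: bool = False) -> str:
    """Weakening is obligatory for unstressed function words,
    except at the end of a sentence (e.g. '... belongs to her.')."""
    strong, weak = WEAK_FORMS[word]
    if stressed or sentence_final:
        return strong
    return weak

print(transcribe_function_word("for"))                       # weak form
print(transcribe_function_word("her", sentence_final=True))  # strong form
```

This captures exactly the mistake pattern noted above: a transcription that weakens a sentence-final her, or leaves an unstressed for strong, breaks an obligatory rule of present-day English.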