GPT versus ChatGPT. GPT and ChatGPT are both language models, and both acquire a broad statistical understanding of language. GPT is pre-trained on a language-modeling objective and can then be fine-tuned for any specific downstream task. ChatGPT builds on a pre-trained GPT model: it is fine-tuned to respond to a user's input prompt, and on top of that it uses reinforcement learning from human feedback to further tune its parameters, steering the text it generates toward responses that are truthful and non-toxic (though this does not guarantee factual accuracy). On GPT's side, instead of just fine-tuning, we can also use meta-learning, that is, providing a few task demonstrations directly in the prompt, to perform different NLP tasks. This means we don't require large labeled datasets for each separate NLP task. ChatGPT, on the other hand, at least in its first iteration, relies on fine-tuning. This is probably because most language problems are solved better with fine-tuning despite its larger data requirements.
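To make the meta-learning idea concrete, here is a minimal sketch of how a few-shot prompt is assembled: task demonstrations go into the prompt itself, so the model adapts with no gradient updates. The helper name `build_few_shot_prompt` and the translation examples are illustrative assumptions, not part of any official API.

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: the task is 'taught' via
    demonstrations in the input text, not via fine-tuning."""
    lines = [task_description]
    for source, target in examples:
        lines.append(f"Input: {source}\nOutput: {target}")
    # The final query is left open for the model to complete.
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("dog", "chien")],
    "cat",
)
print(prompt)
```

The resulting string would be sent to a pre-trained GPT model as-is; the model's continuation after the final "Output:" serves as the answer, which is why only a handful of examples per task are needed.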