Meta learning and GPT. GPT models are a stack of transformer decoders. They were built to understand and solve language tasks such as translation and question answering. They do this in two phases: a pre-training phase, which teaches the model language through language modeling, and a fine-tuning phase, which adapts it to a specific downstream task. One issue with fine-tuning is that it requires thousands of labeled examples for each task we want the model to perform. This isn't human-like: humans typically learn a new task from just a few examples. Wouldn't it be nice if, instead of fine-tuning, we could give a pre-trained model an instruction with a few examples, or even just one example, or sometimes no examples at all, and then ask it to produce a response? This is exactly what few-shot, one-shot, and zero-shot learning mean for GPT, and it is the crux of GPT-3.
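To make the three settings concrete, here is a minimal sketch of how zero-shot, one-shot, and few-shot prompts differ for a toy English-to-French translation task. The prompt format and the `build_prompt` helper are illustrative assumptions, not an API from any particular library; no model is called here, only the text the model would be asked to complete.

```python
def build_prompt(instruction, examples, query):
    """Assemble an in-context learning prompt from an instruction,
    zero or more solved examples, and the query to complete.

    (Hypothetical helper for illustration; the exact prompt layout
    is an assumption, not a fixed GPT-3 interface.)
    """
    lines = [instruction]
    for src, tgt in examples:  # each example is a solved (input, output) pair
        lines.append(f"English: {src}\nFrench: {tgt}")
    # the model is expected to continue the text after this final cue
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

instruction = "Translate English to French."
examples = [("cheese", "fromage"), ("sea otter", "loutre de mer")]

zero_shot = build_prompt(instruction, [], "cheese")           # no examples
one_shot = build_prompt(instruction, examples[:1], "sea otter")  # one example
few_shot = build_prompt(instruction, examples, "plush giraffe")  # several examples
```

In all three cases the model's weights are untouched: the only thing that changes is how much demonstration text appears in the prompt before the query.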