What is the history of AI and text summarization? The 1950s saw the first rule-based approaches, which extracted sentences based on word frequencies. In the 1970s, rationalist, knowledge-based methods took over: they aimed to summarize the way humans do, generating new text rather than simply extracting sentences, but they depended on handcrafted rules and were not very accurate. In the 1990s and 2000s, the field shifted to statistical approaches, training classifiers to decide whether a sentence belonged in the summary, which meant a return to extractive summarization. Then, with the growth of data and compute power in the 2010s, neural networks were applied to summarization: sequence-to-sequence models built on LSTMs could handle longer inputs and generate abstractive outputs. Today we use models like GPT, where a large pre-trained language model is fine-tuned on the summarization task, so far less task-specific summarization data is needed.

Follow for more information on artificial intelligence.