 What is Word2Vec? A language model that predicts the probability of the next word in a sentence? A method of clustering words based on their semantic similarity? A self-supervised learning framework for learning word embeddings? Or a technique for part-of-speech tagging in natural language processing? I'll give you a sec. The answer is C, a self-supervised learning framework for learning word embeddings. Word2Vec was introduced in 2013, and the original paper introduced the continuous bag-of-words and skip-gram architectures; later work produced related embedding methods like GloVe and fastText. For more information, you can go to the channel and watch the full video.
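To make the skip-gram idea concrete, here is a minimal sketch (my illustration, not code from the video): each center word is paired with the words in a small context window around it, and the model then learns embeddings by training to predict those context words from the center word.

```python
# Sketch of skip-gram training-pair generation (illustrative only).
# Each center word is paired with every word within `window` positions.
def skip_gram_pairs(tokens, window=2):
    pairs = []
    for i, center in enumerate(tokens):
        # Look at neighbors within the context window, skipping the center itself.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox".split()
print(skip_gram_pairs(sentence, window=1))
# → [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#    ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

CBOW flips this setup: the context words are the input and the center word is the prediction target.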