Five methods to convert language to vectors, with a short sketch of each below.

1. N-gram vectors. These are sparse vectors built by counting the unigrams, bigrams, trigrams, and so on that appear in a given sentence.
2. TF-IDF vectors. Also sparse; each position weighs how often a word occurs in the sentence against how common it is across the whole corpus, so distinctive words get the highest scores.
3. Neural probabilistic language models. A neural network that learns to predict the next word in a sentence, learning dense vector representations along the way to sidestep the curse of dimensionality.
4. word2vec. A simple architecture for learning a mapping between words and vectors. Each word gets a single fixed vector, even though its context may change from sentence to sentence.
5. Neural bag-of-words. A neural network architecture that averages the word vectors in a sentence to get a single sentence vector.
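For the n-gram vectors, here is a minimal sketch assuming scikit-learn is available; the toy sentences and the choice of unigrams plus bigrams are illustrative.

```python
# Count unigram and bigram occurrences per sentence into sparse vectors.
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["the cat sat on the mat", "the dog sat on the log"]
vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vectorizer.fit_transform(sentences)           # sparse count matrix

print(vectorizer.get_feature_names_out())  # the n-gram vocabulary
print(X.toarray())                         # one count vector per sentence
```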
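For TF-IDF, a similar sketch, again assuming scikit-learn. Words that appear in every sentence get lower weights than words distinctive to one sentence.

```python
# TF-IDF: weight each word's in-sentence frequency by its corpus rarity.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = ["the cat sat on the mat", "the dog chased the cat"]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(sentences)  # sparse TF-IDF matrix

# Shared words (like "the") weigh less than distinctive ones (like "chased").
for word, weight in zip(vectorizer.get_feature_names_out(), X.toarray()[1]):
    print(f"{word}: {weight:.2f}")
```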
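A toy Bengio-style neural probabilistic language model might look like the following; PyTorch is assumed, and the vocabulary size, context window, and layer widths are arbitrary. The embedding table holds the dense word vectors the model learns while predicting the next word.

```python
# Embed a fixed window of previous words, then score the next word.
import torch
import torch.nn as nn

class NPLM(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, context=3, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # dense word vectors
        self.fc1 = nn.Linear(context * emb_dim, hidden)
        self.fc2 = nn.Linear(hidden, vocab_size)       # next-word scores

    def forward(self, context_ids):                    # (batch, context)
        e = self.emb(context_ids).flatten(start_dim=1) # concat embeddings
        return self.fc2(torch.tanh(self.fc1(e)))       # logits over vocab

model = NPLM()
logits = model(torch.randint(0, 1000, (4, 3)))  # batch of 4 contexts
print(logits.shape)                             # torch.Size([4, 1000])
```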
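For word2vec, a minimal sketch assuming gensim is installed; the tiny corpus and vector size are illustrative. Note that each word maps to one fixed vector no matter which sentence it appears in.

```python
# Train skip-gram word2vec (sg=1) on a toy corpus of tokenized sentences.
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "log"]]
model = Word2Vec(sentences=corpus, vector_size=50,
                 window=2, min_count=1, sg=1)

vec = model.wv["cat"]  # the single fixed vector for "cat"
print(vec.shape)       # (50,)
print(model.wv.most_similar("cat", topn=2))
```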
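Finally, a neural bag-of-words sketch, again in PyTorch with illustrative sizes: the sentence vector is simply the mean of the word vectors, which can then feed a classifier head.

```python
# Average word embeddings into a sentence vector via EmbeddingBag.
import torch
import torch.nn as nn

class NBOW(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=32, num_classes=2):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, emb_dim, mode="mean")
        self.out = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids, offsets):
        sent_vec = self.emb(token_ids, offsets)  # mean of word vectors
        return self.out(sent_vec)

model = NBOW()
# Two sentences packed into one flat tensor; offsets mark where each starts.
tokens = torch.tensor([5, 17, 42, 9, 13])
offsets = torch.tensor([0, 3])       # sentence 1: 5,17,42; sentence 2: 9,13
print(model(tokens, offsets).shape)  # torch.Size([2, 2])
```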