Recurrent neural nets (RNNs) are feedforward neural networks rolled out over time. As such, they deal with sequence data, where the input has a defined ordering. This gives rise to several types of architectures. The first is the vector-to-sequence model, which takes a fixed-size vector as input and outputs a sequence of any length. The second is the sequence-to-vector model, which takes a sequence as input and outputs a fixed-length vector. The sequence-to-sequence model is the most popular variant: it takes a sequence as input and outputs another sequence.

However, RNNs have some problems. They are slow to train, so slow that we typically use a truncated version of backpropagation through time. They also can't handle long sequences very well: when the unrolled network is too long, the gradients vanish or explode.
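To make the "rolled out over time" idea and the vanishing-gradient problem concrete, here is a minimal NumPy sketch. All dimensions, weight scales, and names (`rnn_forward`, `W_hh`, `W_xh`) are illustrative assumptions, not from the source: it runs a sequence-to-vector forward pass with one shared set of weights reused at every step, then multiplies the per-step Jacobians of a tanh RNN to show how the gradient of the final state with respect to the initial state shrinks over a long sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (chosen for illustration).
hidden, inputs, steps = 8, 4, 50

# Small weight scale so this example exhibits vanishing (not exploding) gradients.
W_hh = rng.normal(0, 0.2, (hidden, hidden))   # recurrent (hidden-to-hidden) weights
W_xh = rng.normal(0, 0.2, (hidden, inputs))   # input-to-hidden weights

def rnn_forward(xs):
    """Sequence-to-vector: fold a whole sequence into one final hidden state."""
    h = np.zeros(hidden)
    states = [h]
    for x in xs:
        # Same weights at every time step -- this is the "rolled out" network.
        h = np.tanh(W_hh @ h + W_xh @ x)
        states.append(h)
    return h, states

xs = rng.normal(size=(steps, inputs))
h_final, states = rnn_forward(xs)

# Backprop through time chains one Jacobian per step:
#   dh_T/dh_0 = prod_t diag(1 - h_t^2) @ W_hh
# With small weights and tanh saturation, this product's norm collapses toward 0.
grad = np.eye(hidden)
for h in states[1:]:
    grad = (np.diag(1 - h**2) @ W_hh) @ grad

print(h_final.shape)                 # fixed-length vector from a length-50 sequence
print(np.linalg.norm(grad))         # tiny for long sequences: the vanishing gradient
```

Truncated backpropagation through time amounts to cutting this Jacobian chain after a fixed number of steps, which is why it is both faster and blind to longer-range dependencies.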