The first thing to notice about Deep Learning by Ian Goodfellow is the cover. At first glance it looks like a university, but on closer inspection you'll see the image is painted with puppies. This book will give any reader a solid foundation in deep learning techniques and vocabulary.

Coming from the statistics side, I felt the vocabulary was a big help. For example, when a statistician uses the word inference, they mean how the small sample (little n) generalizes to the population (big N). In deep learning, inference is what a statistician would call prediction. Another example is the word bias: in deep learning, bias is what statisticians call the intercept, while in statistics, bias shows up as a systematic (non-random) pattern in the residuals. So to recap, if you're coming from statistics, this book is a nice Rosetta Stone.

This book has one of the best linear algebra reviews I've seen. If you feel like there are gaps in your matrix math, chapter two will fill those gaps. The notation is standard, matching what you'd see in a college-level linear algebra course.

I was also pleased to see a whole chapter devoted to Monte Carlo methods. This was much more my language, and it was a treat to read through. The authors also include a little section on pseudolikelihood, which I thought was impressive.

Everything you would expect to see in a deep learning book is here. If terms like rectified linear unit, autoencoder, RNN, or deep Boltzmann machine are foreign to you, pick up this book and read through it. It will lay a good foundation.
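To make the terminology mapping concrete, here's a minimal sketch (the data and variable names are my own illustration, not from the book): in an ordinary linear model y = w*x + b, the term b is what deep learning calls the bias and statistics calls the intercept, and "inference" in the deep learning sense is simply computing predictions from the fitted model.

```python
import random

# Toy data: y = 2*x + 5 plus a little noise. The "5" is the
# intercept (statistics jargon) a.k.a. the bias (deep learning jargon).
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [2.0 * x + 5.0 + random.gauss(0, 0.1) for x in xs]

# Ordinary least squares for y = w*x + b, closed form:
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x  # the bias (DL) / intercept (stats)

print(f"weight w = {w:.2f}, bias/intercept b = {b:.2f}")

# "Inference" in deep-learning usage is just prediction:
y_hat = [w * x + b for x in xs]
```

Run it and the fit recovers roughly w = 2 and b = 5; the same quantity gets two different names depending on which community you learned it from.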