This paper proposes a method for comparing how spoken language is encoded by the human brain and by artificial neural networks. It finds that the brain and the networks encode the same acoustic properties in similar ways, suggesting that they may share some underlying principles of processing. This could have implications for understanding how the brain processes speech and how speech processing might be modelled computationally. The paper was authored by Gašper Beguš, Alan Zhou, and T. Christina Zhao.