How can we tell what's in a voice? A new study confirms that specific brain regions called voice patches help us process the information contained in voices. The findings are published in the Proceedings of the National Academy of Sciences.

The human voice is sometimes called an "auditory face" because, beyond words, it carries tones and inflections that convey a speaker's identity and emotional state. Functional MRI studies suggest that, just as certain face patches in the brain help humans and other primates process faces, analogous voice patches help humans process voices. However, the functions and connections of the voice patches aren't clear.

To learn more, researchers recently took advantage of electrodes that had already been placed in the brains of five patients with epilepsy, all of whom were native Chinese speakers. They used these electrodes to record brain activity in response to various types of sounds, including the voices of Chinese and English speakers.

Electrodes in some brain areas responded significantly to the sounds, and the response patterns fell into five main categories with different degrees of voice specificity. In the temporal lobe of both hemispheres, the voice-specific electrodes were located in three distinct clusters: the posterior, middle, and anterior temporal patches, or PT, MT, and AT. In both hemispheres, the PT and AT responded nearly exclusively to human speech, while the MT responded to human speech as well as human and animal vocalizations.

The right-hemisphere AT responded to both Chinese and English speech, but the left-hemisphere AT responded only to Chinese speech. Since Chinese was the subjects' native language, this finding suggests a key role for the left AT in native-language processing. Interestingly, certain motor areas in the left hemisphere were also involved in processing human speech.

The voice patches were interconnected under resting conditions and during sound processing. Based on the timing of the electrode responses, the researchers concluded that human voice sounds were processed in a dual hierarchy: a sound was first processed by the MT and then moved in two directions, from the MT to the AT and from the MT to the PT (see the sketch at the end of this summary).

Notably, the sample size of this study was small, and all subjects came from the same country and spoke the same language. In addition, all subjects had epilepsy. Larger studies in more diverse populations with typical neurological function are needed to confirm that these findings generalize.

Overall, this study reveals a network of voice patches in the human brain that enables us to distinguish human speech from other sounds and to recognize the distinctive patterns of our own language. The findings help decode the human voice-processing system and highlight the parallels between the face- and voice-recognition pathways in the brain.
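
To make the reported dual hierarchy concrete, here is a minimal sketch in Python. It is not the authors' code: the patch names (PT, MT, AT), their sound selectivity, and the MT-to-AT and MT-to-PT flow come from the study as summarized above, while the data structures and the processing_order helper are purely illustrative assumptions.

    # A minimal sketch (not the authors' code) of the voice-patch
    # network described above: response selectivity per patch and the
    # reported dual-hierarchy flow (MT -> AT and MT -> PT).

    # Sound categories each temporal patch responded to, per the summary.
    PATCH_SELECTIVITY = {
        "PT": {"human speech"},
        "MT": {"human speech", "human vocalization", "animal vocalization"},
        "AT": {"human speech"},
    }

    # Directed edges of the reported dual hierarchy: processing starts
    # at MT and proceeds in parallel toward AT and PT.
    DUAL_HIERARCHY = {
        "MT": ["AT", "PT"],
        "AT": [],
        "PT": [],
    }

    def processing_order(start="MT"):
        """Return patches in the order a sound would reach them (BFS)."""
        order, queue = [], [start]
        while queue:
            patch = queue.pop(0)
            order.append(patch)
            queue.extend(DUAL_HIERARCHY[patch])
        return order

    if __name__ == "__main__":
        print(processing_order())  # ['MT', 'AT', 'PT']

Representing the pathway as a directed graph simply emphasizes the study's key structural claim: processing begins at the MT and fans out to the AT and PT in parallel, rather than passing through the patches in a single serial chain.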