Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders
Yoichi Miyawaki,1,2,6 Hajime Uchida,2,3,6 Okito Yamashita,2 Masa-aki Sato,2 Yusuke Morito,4,5 Hiroki C. Tanabe,4,5
Norihiro Sadato,4,5 and Yukiyasu Kamitani2,3,*
1National Institute of Information and Communications Technology, Kyoto, Japan
2ATR Computational Neuroscience Laboratories, Kyoto, Japan
3Nara Institute of Science and Technology, Nara, Japan
4The Graduate University for Advanced Studies, Kanagawa, Japan
5National Institute for Physiological Sciences, Aichi, Japan
6These authors contributed equally to this work
Perceptual experience consists of an enormous number
of possible states. Previous fMRI studies have
predicted a perceptual state by classifying brain
activity into prespecified categories. Constraint-free
visual image reconstruction is more challenging, as
it is impractical to specify brain activity for all possible
images. In this study, we reconstructed visual images
by combining local image bases of multiple scales,
whose contrasts were independently decoded from
fMRI activity by automatically selecting relevant voxels
and exploiting their correlated patterns. Binary-contrast,
10 × 10-patch images (2^100 possible states)
were accurately reconstructed without any image
prior on a single trial or volume basis by measuring
brain activity only for several hundred random
images. Reconstruction was also used to identify
the presented image among millions of candidates.
The results suggest that our approach provides an effective
means to read out complex perceptual states
from brain activity while discovering information
representation in multivoxel patterns.
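The combination of multiscale local image decoders summarized above can be sketched in code. The sketch below is illustrative, not the paper's method: the patch scales (1×1 through 2×2), the plus-sign target image, and the "perfect" per-patch contrast decoders are all assumptions standing in for decoders that the study trains on measured fMRI activity.

```python
import numpy as np

SIZE = 10  # 10 x 10 binary-contrast image, as in the abstract
SCALES = [(1, 1), (1, 2), (2, 1), (2, 2)]  # assumed local-basis scales

def make_bases(size=SIZE, scales=SCALES):
    """Return binary masks, one per local image basis, tiled over the image."""
    bases = []
    for h, w in scales:
        for i in range(size - h + 1):
            for j in range(size - w + 1):
                mask = np.zeros((size, size))
                mask[i:i + h, j:j + w] = 1.0
                bases.append(mask)
    return bases

def reconstruct(decoded_contrasts, bases):
    """Combine per-basis decoded contrasts into one image estimate,
    normalizing by how many bases cover each pixel."""
    weighted = sum(c * b for c, b in zip(decoded_contrasts, bases))
    coverage = sum(bases)  # every pixel is covered by at least its 1x1 basis
    return weighted / coverage

bases = make_bases()

# Stand-in for decoder outputs: the mean true contrast within each patch of
# a plus-sign target, as if every local decoder were perfect.
target = np.zeros((SIZE, SIZE))
target[4:6, :] = 1.0
target[:, 4:6] = 1.0
contrasts = [(target * b).sum() / b.sum() for b in bases]

image = reconstruct(contrasts, bases)
binary = (image > 0.5).astype(float)  # threshold back to binary contrast
```

Averaging overlapping scales blurs patch boundaries slightly, but because each pixel is supported by several bases, thresholding the combined estimate recovers the binary target; in the actual study, the per-basis contrasts come from decoders with automatically selected voxels rather than from the target image itself.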
Neuron 60, 915–929, December 11, 2008 ©2008 Elsevier Inc.