Abstract

Neural representation is often described by the tuning curves of individual neurons with respect to certain stimulus variables. Despite this tradition, it has become increasingly clear that neural tuning can vary substantially in accordance with a collection of internal and external factors. A challenge we are facing is the lack of appropriate methods to accurately capture the moment-to-moment tuning variability directly from noisy neural responses. Here we introduce an unsupervised statistical approach, Poisson functional principal component analysis (Pf-PCA), which identifies different sources of systematic tuning fluctuations and, moreover, encompasses several current models (e.g., multiplicative gain models) as special cases. Applying this method to neural data recorded from macaque primary visual cortex, a paradigmatic case for which the tuning curve approach has been scientifically essential, we discovered a simple relationship governing the variability of orientation tuning, which unifies the different types of gain changes proposed previously. By decomposing neural tuning variability into interpretable components, our method enables the discovery of unexpected structure in the neural code, simultaneously capturing the influence of external stimulus drive and internal state. This article was authored by Rong J. B. Zhu and Xue-Xin Wei.
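The multiplicative-gain form of tuning variability mentioned above can be illustrated with a small simulation. The sketch below is purely hypothetical and is not the authors' Pf-PCA implementation: it generates Poisson spike counts from an assumed orientation tuning curve whose log-rate is scaled by a trial-varying gain, then uses a crude log transform plus ordinary PCA (SVD) as a stand-in for a Poisson-aware decomposition. All parameter values (trial counts, tuning shape, gain spread) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 120 trials, 24 orientation bins. Each trial's
# log-tuning curve is a shared template scaled by a trial-specific
# multiplicative gain -- one of the gain-change models the abstract
# says Pf-PCA encompasses as a special case.
n_trials, n_bins = 120, 24
theta = np.linspace(0, np.pi, n_bins, endpoint=False)
base_log_rate = 1.5 + 2.0 * np.exp(np.cos(2 * (theta - np.pi / 3)) - 1)
gain = rng.normal(1.0, 0.3, size=n_trials)        # trial-to-trial gain
log_rate = gain[:, None] * base_log_rate          # (trials, bins)
counts = rng.poisson(np.exp(log_rate))            # Poisson spike counts

# Crude stand-in for a Poisson-aware fit: estimate per-trial log-rates
# with a variance-stabilized log transform, then PCA via SVD on the
# mean-centered estimates.
log_est = np.log(counts + 0.5)
mean_curve = log_est.mean(axis=0)
U, S, Vt = np.linalg.svd(log_est - mean_curve, full_matrices=False)

# Under pure multiplicative gain, one component should dominate the
# trial-to-trial variability and align with the shared tuning template.
var_explained = S**2 / np.sum(S**2)
pc1 = Vt[0]
alignment = abs(pc1 @ base_log_rate) / (
    np.linalg.norm(pc1) * np.linalg.norm(base_log_rate)
)
print(f"PC1 variance share: {var_explained[0]:.2f}, "
      f"alignment with tuning template: {alignment:.2f}")
```

In this toy setting the leading component captures most of the trial-to-trial variance and points along the tuning template, which is the signature of a multiplicative gain model; other sources of systematic fluctuation (e.g., additive offsets or tuning shifts) would instead load onto differently shaped components.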