When we perceive a stimulus, it is first encoded by our sensory organs and early sensory brain regions. Then, in a process called decoding, our brains interpret the neural signals to create a percept: a mental representation of the stimulus. Perception requires both encoding and decoding, yet the two processes have largely been considered separately. Over the past several decades, efficient encoding and Bayesian decoding have emerged as the dominant theories of these two processes, and researchers recently combined them into a new, more holistic Bayesian observer model. They then showed that the new model could successfully describe previously unexplained aspects of human perceptual behavior.

According to Bayesian decoding, perception is determined by three components: a likelihood function, which describes how accurately sensory representations are encoded; a prior, which describes the observer's expectations about how probable a given stimulus is; and a cost function, which describes the penalty associated with making a perceptual error. A shortcoming of current Bayesian observer models is that these components are often defined somewhat arbitrarily by the researchers who apply the models, and this has resulted in substantial criticism.

In their new Bayesian observer model, the researchers applied the key tenet of efficient encoding to define the model's prior expectations and likelihood function, allowing them to drastically reduce the number of free model parameters. On one hand, efficient encoding requires that sensory representations are efficient; in other words, that they are optimally adapted to the statistics of natural stimuli. On the other hand, Bayesian decoding implies that an observer's prior expectations are also based on the statistics of natural stimuli. In the new model, the stimulus statistics therefore determine how sensory information is both encoded and decoded.

The new model also generated two surprising, seemingly anti-Bayesian predictions about perceptual behavior, which the researchers then tested using existing human behavioral data. The first was that perception can be shifted, or biased, away from an observer's expectations. This type of repulsive bias contrasts with the predictions of traditional Bayesian models, but it accurately captures the perceptual biases of human observers asked to make simple judgments about visual stimuli, such as the orientation of lines and the spacing between them. The second was that the influence of stimulus uncertainty on perceptual bias depends on whether the uncertainty arises from the stimulus itself or from the sensory representation of the stimulus. Key features of this prediction were also borne out by human perceptual data.

By letting the characteristics of the natural environment shape both the encoding and decoding of sensory information, the researchers created a more holistic Bayesian observer model that should apply across all sensory modalities. Future studies will need to determine how accurately the model can be applied to other, more realistic perceptual situations, and how its computations may be reflected in the underlying neural processing.
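For readers who want the three decoding components above in symbols, a standard textbook formulation of a Bayesian observer looks like the following. The notation (θ for the stimulus, m for the noisy sensory measurement, L for the cost function) is our own shorthand rather than the paper's exact equations, and the final line is one common way of expressing the efficient-encoding constraint that ties encoding precision to the stimulus statistics.

```latex
% Bayes' rule combines the likelihood and the prior into a posterior
p(\theta \mid m) \;=\; \frac{p(m \mid \theta)\, p(\theta)}
                            {\int p(m \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}

% The percept is the estimate that minimizes the expected cost of an error
\hat{\theta}(m) \;=\; \arg\min_{a} \int L(a, \theta)\, p(\theta \mid m)\, \mathrm{d}\theta

% Efficient encoding ties the encoding precision (Fisher information J)
% to how common each stimulus is, i.e. to the prior
\sqrt{J(\theta)} \;\propto\; p(\theta)
```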
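The repulsive bias can also be demonstrated in a few lines of simulation. The sketch below makes several assumed modeling choices that are not taken from the paper itself: a Gaussian prior over a generic stimulus variable, Gaussian noise in the internal (efficiently encoded) space, and a posterior-mean readout corresponding to a squared-error cost. Encoding the stimulus through the prior's cumulative distribution function is one simple way to spend more representational resolution where stimuli are more common.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stimulus grid and a prior that peaks at theta = 0
theta = np.linspace(-5.0, 5.0, 1501)
dtheta = theta[1] - theta[0]
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum() * dtheta

# Efficient encoding: map the stimulus through the prior's CDF, so that
# more resolution is devoted to frequently occurring stimulus values
F = np.cumsum(prior) * dtheta

sigma = 0.05        # noise level in the internal space
n_samples = 1000    # Monte Carlo samples per stimulus value

for theta0 in (0.5, 1.0, 1.5, 2.0):
    # Noisy internal measurements of the true stimulus
    m = np.interp(theta0, theta, F) + sigma * rng.standard_normal(n_samples)

    # Bayesian decoding: posterior over theta for each measurement,
    # then the posterior mean as the percept (squared-error cost)
    like = np.exp(-0.5 * ((m[:, None] - F[None, :]) / sigma) ** 2)
    post = like * prior[None, :]
    post /= post.sum(axis=1, keepdims=True)
    percept = post @ theta

    bias = percept.mean() - theta0
    # Positive bias means the percept is repelled from the prior peak at 0
    print(f"stimulus {theta0:+.1f}  mean percept {percept.mean():+.3f}  "
          f"bias {bias:+.3f}")
```

Under these assumptions the printed bias is positive for every test stimulus: because encoding is finer near the prior peak, the likelihood in stimulus space has a long tail pointing away from the peak, which skews the posterior mean outward and overcomes the prior's attractive pull.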