In our everyday lives, we're often faced with the challenge of understanding speech when other sounds are present. We know that in these situations people benefit from knowing where to listen in advance, in other words from directing spatial attention before someone begins to speak. Imagine a typical experimental setup in which the listener sits at the centre of an array of loudspeakers and hears three different phrases from different locations in front of them. We can tell them in advance which talker they should selectively attend to by presenting a visual cue, for example a leftward-pointing arrow.

In these situations attention doesn't appear to be all or none; instead, preparatory attention appears to build up over time. For example, reaction times progressively improve as the instructional cue is presented further in advance of the target talker, an interval we call the cue-target interval. Likewise, EEG activity gradually increases in amplitude before the target talker starts speaking. Here we wanted to know what computational processes underlie these effects.

We modelled this using active inference, an extension of predictive coding based on the idea that our brains actively predict the speech signal using an underlying generative model. We developed a new generative model of selective attention during cocktail-party listening, treating cocktail-party listening as a Bayesian inference problem. We used this model to test competing hypotheses about how time-sensitive changes in the model affect simulated reaction times and EEG responses, and we compared these simulations to empirical data recorded from human participants. Interestingly, we found evidence that the behavioural and EEG effects are underpinned by separate computational processes.

If you want to find out more about this work, then please feel free to watch my talk.
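The talk itself describes the actual generative model; as a purely illustrative sketch of what "treating cocktail-party listening as Bayesian inference" can mean, here is a toy example. All numbers, the `cued_prior` helper, and the way precision grows with the cue-target interval are invented assumptions for illustration, not the model from this work:

```python
import numpy as np

def posterior_over_talkers(prior, likelihood):
    """Bayes' rule: P(talker | sound) is proportional to P(sound | talker) * P(talker)."""
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

def cued_prior(cue_location, precision):
    """Toy preparatory attention: a cue sharpens the prior toward the cued
    location, and we assume (hypothetically) it sharpens more the longer
    the cue-target interval."""
    prior = np.full(3, 1.0)
    prior[cue_location] += precision
    return prior / prior.sum()

# Three candidate talker locations: left, centre, right.
# Invented likelihood of the acoustic evidence under each talker hypothesis.
likelihood = np.array([0.5, 0.3, 0.2])

# A leftward cue (index 0) at three hypothetical cue-target intervals:
for tau in (0.0, 1.0, 4.0):
    p = posterior_over_talkers(cued_prior(0, tau), likelihood)
    print(f"interval={tau}: P(left talker | sound)={p[0]:.2f}")
```

Under these made-up numbers, a longer interval yields a sharper prior on the cued location and hence a larger posterior on the left talker, loosely mirroring the graded build-up of preparatory attention described above.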
You'll be able to learn more about how the model works, discover the different processes underlying the behavioural and EEG effects, and also see simulated EEG activity under the model. Thanks for listening.