We present a method for examining audiovisual prosody perception based on virtual humans. Prosody plays a vital role in verbal communication and relies on several acoustic cues. However, visual cues such as eyebrow and head movements also convey important information, and the aim of the present study is to introduce a method that allows video-realistic yet controllable presentation of audiovisual prosody. The method is based on a realistic communication situation during which we captured the facial and head movements of a talker; these data were then used to animate a virtual human. An example shows how the virtual human appears when driven by the expressions of the real talker, and the presentation describes the outcome in terms of a technical evaluation of eyebrow and head movements. We conclude that the virtual human shows movements comparable to those of a real talker and, importantly, that these movements are scalable, making the approach a valuable tool for examining multimodal prosody perception.
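To illustrate what "scalable" movements could mean in practice, the sketch below shows one way the amplitude of captured eyebrow and head movements might be scaled around a neutral pose before driving a virtual human. This is a minimal illustration under assumed parameterisation; the `Frame` class, `scale_motion` function, and the specific parameters are hypothetical and are not the pipeline used in the study.

```python
# Hypothetical sketch: scaling captured talker motion before it drives a
# virtual human. Names and parameterisation are illustrative assumptions,
# not the authors' actual system.
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """One frame of captured talker motion (assumed parameterisation)."""
    brow_raise: float        # eyebrow-raise weight, 0.0 = neutral
    head_pitch_deg: float    # head rotation about the x-axis
    head_yaw_deg: float      # head rotation about the y-axis
    head_roll_deg: float     # head rotation about the z-axis

def scale_motion(frames: List[Frame], gain: float,
                 neutral: Frame = Frame(0.0, 0.0, 0.0, 0.0)) -> List[Frame]:
    """Scale each frame's deviation from the neutral pose by `gain`.

    gain < 1 attenuates, gain = 1 reproduces, and gain > 1 exaggerates
    the captured eyebrow and head movements while keeping the rest pose fixed.
    """
    scaled = []
    for f in frames:
        scaled.append(Frame(
            brow_raise=neutral.brow_raise + gain * (f.brow_raise - neutral.brow_raise),
            head_pitch_deg=neutral.head_pitch_deg + gain * (f.head_pitch_deg - neutral.head_pitch_deg),
            head_yaw_deg=neutral.head_yaw_deg + gain * (f.head_yaw_deg - neutral.head_yaw_deg),
            head_roll_deg=neutral.head_roll_deg + gain * (f.head_roll_deg - neutral.head_roll_deg),
        ))
    return scaled

# Example: derive a reduced (50 %) and an exaggerated (150 %) version of a
# short captured sequence, e.g. for use as experimental stimuli.
captured = [Frame(0.2, 1.5, -3.0, 0.5), Frame(0.6, 4.0, -6.5, 1.0)]
reduced = scale_motion(captured, gain=0.5)
exaggerated = scale_motion(captured, gain=1.5)
```

Scaling around the neutral pose in this way leaves the talker's rest posture unchanged while varying only the movement amplitude, which is one plausible way to obtain graded stimuli for perception experiments.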