Published on Apr 7, 2008
Hubert Shum, Taku Komura and Shuntaro Yamazaki
Efficient computation of strategic movements is essential to control virtual avatars intelligently in computer games and 3D virtual environments. Such a module is needed to control non-player characters (NPCs) to fight, play team sports or move through a mass crowd. Reinforcement learning is an approach to achieve real-time optimal control. However, the huge state space of human interactions makes it difficult to apply existing learning methods to control avatars when they have dense interactions with other characters. In this research, we propose a new methodology to efficiently plan the movements of an avatar interacting with another. We make use of the fact that the subspace of meaningful interactions is much smaller than the whole state space of two avatars. We efficiently collect samples by exploring the subspace where dense interactions between the avatars occur and favor samples that have high connectivity with the other samples. Using the collected samples, a finite state machine (FSM) called Interaction Graph is composed. At run-time, we compute the optimal action of each avatar by min-max search or dynamic programming on the Interaction Graph. The methodology is applicable to control NPCs in fighting and ball-sports games.
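The run-time step described above can be illustrated with a minimal sketch. The toy graph, state names, rewards, and depth below are all hypothetical and not from the paper; the sketch only shows the general idea of min-max search over a finite state machine whose nodes are sampled interaction states and whose edges are actions:

```python
# Hypothetical toy Interaction Graph: edges[state] -> list of
# (next_state, reward_for_avatar_A). States and rewards are invented
# for illustration only.
edges = {
    "s0": [("s1", 2), ("s2", 0)],
    "s1": [("s3", 1), ("s4", 5)],
    "s2": [("s3", 3), ("s4", 1)],
    "s3": [],
    "s4": [],
}

def minimax(state, depth, maximizing):
    """Best cumulative reward for avatar A from `state`, assuming A
    maximizes the reward while the opposing avatar minimizes it."""
    if depth == 0 or not edges[state]:
        return 0  # leaf of the search: no further actions to evaluate
    if maximizing:
        return max(r + minimax(nxt, depth - 1, False)
                   for nxt, r in edges[state])
    return min(r + minimax(nxt, depth - 1, True)
               for nxt, r in edges[state])

# Search two actions ahead from state s0 with avatar A moving first.
print(minimax("s0", 2, True))  # -> 3
```

At each node the controlled avatar picks the edge maximizing its return while assuming the opponent picks the minimizing reply; in a real game the search would run to a fixed depth every frame and the chosen edge's motion clip would be played back.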