Published on Nov 23, 2009
This video demonstrates how embodied experience enables an AI to correctly resolve references in natural language — in this case, to determine which object an instance of the word "it" refers to.
The video was screen-captured from an actual real-time interaction between a human and an AI in the Multiverse virtual world.
The virtual dog is controlled by the AI; the human avatar is controlled by a human player.
While relatively simple, the example involves the integration of a natural language engine (RelEx), a cognition engine (the OpenCog-based OpenPetBrain), and a virtual world engine (Multiverse).
The text box that sometimes appears at the bottom left of the video contains the text typed by the human player to communicate with the AI dog. The thought bubbles and captions shown elsewhere in the video were added after it was recorded, to better explain what is going on.
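For readers unfamiliar with the reference-resolution task the video demonstrates: a purely text-based heuristic for resolving "it" might simply pick the most recently mentioned candidate object. The sketch below is a hypothetical toy illustration of that baseline — it is not the RelEx/OpenPetBrain implementation, whose point is precisely that embodied context (what the dog can see and interact with in the virtual world) lets it do better than such recency heuristics.

```python
# Toy recency/salience baseline for resolving "it" -- a hypothetical
# illustration, NOT the actual RelEx/OpenCog algorithm.

def resolve_it(discourse_entities):
    """Return the best candidate referent for 'it'.

    discourse_entities: list of (mention, salience) pairs, ordered
    from oldest to newest mention in the dialogue.
    """
    if not discourse_entities:
        return None
    # Pick the highest-salience entity; ties go to the most recent one.
    index, (mention, salience) = max(
        enumerate(discourse_entities),
        key=lambda item: (item[1][1], item[0]),
    )
    return mention

# "grab the ball" ... "fetch the stick" ... "bring it here":
# with equal salience, a recency-only model binds "it" to "stick".
entities = [("ball", 1.0), ("stick", 1.0)]
print(resolve_it(entities))  # -> stick
```

An embodied agent can override this ordering with perceptual cues — for example, binding "it" to the object the avatar is pointing at or looking at — which is what the video's interaction shows.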