Tiger Training
A mixed reality installation using positional tracking, hand gestures, and voice to control virtual characters.
"Tiger Training" explores the relationship between real and virtual worlds through interaction with virtual animals in an immersive mixed reality environment. This interactive, audience-participative installation encourages viewers to "train" virtual animals using hand signals and voice commands.
Virtual tigers are not easily "trained"; viewers must learn appropriate ways to coax responsive actions. The video below demonstrates the installation in progress. The larger image on the right shows what the viewer sees through the head-mounted display.
The installation is virtually embedded in real-world environments where viewers can move freely through both the real and virtual worlds. The mixed reality animals are viewed through a headset that is tracked in real time, and the viewer's hand gestures and voice are captured to create natural, interactive "training" of the virtual animals.
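A minimal sketch of how such a per-frame interaction loop might combine tracked headset pose with gesture and voice input is below. All names here (Pose, TigerAgent, run_frame, poll(), and so on) are hypothetical illustrations under assumed interfaces, not the installation's actual code:

```python
# Hypothetical sketch of the per-frame loop described above: read the
# tracked headset pose, poll gesture and voice recognisers, update the
# autonomous tiger, and render it registered to the real scene.
import random
from dataclasses import dataclass


@dataclass
class Pose:
    """Headset position (x, y, z) and yaw from real-time tracking."""
    x: float = 0.0
    y: float = 1.7
    z: float = 0.0
    yaw: float = 0.0


class TigerAgent:
    """An autonomous character that reacts to gesture and voice prompts."""

    def __init__(self) -> None:
        self.state = "idle"

    def update(self, viewer: Pose, gesture: str | None, command: str | None) -> None:
        # A trained response needs the right combination of prompts;
        # otherwise the tiger does what it wants, the "not easily
        # trained" behaviour the piece is built around.
        if gesture == "beckon" and command == "come":
            self.state = "approach_viewer"
        elif command == "sit":
            self.state = "sit" if random.random() < 0.5 else "pace"
        elif gesture or command:
            self.state = "look_at_viewer"


def run_frame(tracker, gestures, speech, tiger: TigerAgent, renderer) -> None:
    viewer = tracker.current_pose()        # headset pose, sampled each frame
    tiger.update(viewer, gestures.poll(), speech.poll())
    renderer.draw(tiger, viewer)           # tiger drawn from the viewer's tracked viewpoint
```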
The installation is a demonstration of in-progress research on real-time, stable, markerless tracking techniques currently underway at the Interaction and Entertainment Research Center. This first installation places virtual tigers in proximity to the viewer in such a way that the viewer can move around, view the animals from multiple angles, and use hand and voice signals to coax responses from them.
The sample images here are from the current in-progress installation, "Tiger Training." The installation uses mixed reality and game technologies to create audience-participative, immersive interaction with virtual animals. The experience is designed to elicit responsive interactions and cooperation between real persons and virtual animals. Expanding on ideas of virtual pets, the piece creates natural interactions between virtual animals and humans.

Recent work focuses on the integration of virtual "human-like" and "animal-like" agents that naturally interact with real-world participants in mixed and augmented reality environments. Built upon existing and ongoing research into intelligence in virtual agents, as characters in games and as interlocutors in theatre or simulation, there is potential for co-evolution of narratives. Agents with modest abilities to sense and symbolically interpret the actions of participants allow autonomous interaction between humans and virtual animal-like agents within the framework of a mixed reality performance.

This "location-based entertainment" form allows narratives, animation, and cinematic presentations to occur in real-world locations. The form is interactive and audience-participative; it moves away from passive entertainment, placing viewers within the "fourth wall" and immersing them in experiential performances. Wearing head-mounted display systems, observers walk freely around the environment and view the animal-like agents from various angles. Through voice and gesture recognition, interactions are natural, integrating autonomous characters that respond with non-ascribed behaviours in mutually inclusive, affective dialogues along narrative paths without predetermined outcomes.

The installation is a somewhat humorous look at "animal training," applying it to the "training" of virtual animals.
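One way to picture these "modest abilities to sense and symbolically interpret" is as a lookup from recognised symbolic prompts to candidate behaviours, filtered through the agent's own internal state so that responses are autonomous rather than scripted. The following is a hedged sketch with invented names, vocabulary, and thresholds, not the installation's implementation:

```python
# Hypothetical sketch: symbolically interpreted participant actions
# mapped to behaviours, modulated by internal state so that outcomes
# are autonomous rather than predetermined.
import random

# Invented vocabulary: (recognised gesture, recognised voice command)
# pairs mapped to the behaviour they are meant to elicit.
PROMPT_BEHAVIOURS = {
    ("beckon", "come"): "approach",
    ("palm_down", "sit"): "sit",
    ("sweep_arm", "jump"): "leap",
}


class VirtualTiger:
    """Selects behaviours autonomously rather than by fixed script."""

    def __init__(self) -> None:
        self.trust = 0.2   # grows as the viewer learns effective prompts

    def respond(self, gesture: str | None, command: str | None) -> str:
        behaviour = PROMPT_BEHAVIOURS.get((gesture, command))
        if behaviour is None:
            return "ignore"            # prompt not symbolically recognised
        self.trust = min(1.0, self.trust + 0.05)
        if random.random() < self.trust:
            return behaviour           # the coaxed, "trained" response
        return "watch_viewer"          # acknowledged but not obeyed
```

Because the outcome depends on the agent's accumulated internal state, repeated prompting by a viewer who adapts their own signals along the way shapes the path the interaction takes, with no single predetermined outcome.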
Through gesture and voice recognition, prompts given by viewers can coax responsive interactions from these virtual animals. Inherent in this training is an underlying question as to who is really being trained in such circumstances. As we see in animal training shows at animal parks, humans inevitably alter their behaviours to elicit responses from the animals. These behaviours are unnatural and belie the truth that the trainer is being trained as much as the animal.
Additional motions for the tigers' reactions are evoked by gesture recognition.
These animation behaviours were repurposed for the multi-screen display system at the Boston Convention and Exhibition Center, displayed at: