A reactive artwork that detects facial cues and moods of remote participants via a mobile app
This installation presents a dynamic time-lapse still-life painting that shifts subtly in response to sensed personal characteristics of participants, captured via a mobile app from remote locations. The body of work explores autonomously responsive media delivery driven by detected and analyzed biometric data. In the exhibition space, the artworks presented are a series of time-lapse animated or filmed scenes whose atmospheric, stylistic, or other variable elements change.
The video above demonstrates the capture, normalization, and detection of a face and the process of emotion recognition, with the resulting response based on the emotions detected.
The system analyzes participants' faces, captured by camera via a mobile app. Based on the sensed personal characteristics of the viewer, it modifies the form and content of the media. The installation seeks to engage both on-site viewers and remote participants in an aesthetic experience that is audience-controlled.
The system is implemented using a backend server application that analyzes the facial images transmitted from the mobile apps. The facial images are analyzed for emotion, producing an experience responsive to the participants' personal physical attributes and moods.
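A minimal sketch of what the backend's analysis entry point might look like, at the function level. All names here (`analyze_face_image`, the stub `detect_and_classify`) are illustrative assumptions, not the installation's actual API; in the real system the stub would be replaced by the OpenCV detection and LibSVM classification stages described below.

```python
import json

def detect_and_classify(jpeg_bytes):
    """Stub for the face-detection + emotion-classification stage.

    A real implementation would decode the image, locate the face,
    extract features, and score each basic emotion; fixed scores
    stand in here for illustration.
    """
    return {"happiness": 0.7, "surprise": 0.2, "sadness": 0.1}

def analyze_face_image(jpeg_bytes):
    """Analyze a transmitted facial image and return a JSON response."""
    scores = detect_and_classify(jpeg_bytes)
    best = max(scores, key=scores.get)  # best-guess emotional state
    return json.dumps({"emotion": best, "scores": scores})

# Illustrative call with a (truncated) JPEG payload:
response = analyze_face_image(b"\xff\xd8")
print(response)
```

The mobile app would POST the captured frame to this endpoint and render the installation's response from the returned JSON.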
Technically, the design of such systems requires an inherent intelligence that is ambient and ubiquitous, allowing for the interpretation of a wide variety of easily collected stimuli. The system's intelligence must offer a range of options that can be autonomously reactive, giving meaningful responses to visual and sensor cues captured from the environment as well as from mobile devices actively engaged by remote participants.
Our method combines vector-map and 3D-mesh local feature analysis (LFA) of ordered matches with an image-processing approach that identifies shapes within an image field, using algorithms such as the Viola–Jones Haar-like feature detector in OpenCV and a support vector machine library (LibSVM) trained on facial images. For emotion detection, we use a 3D mesh vector template with elastic bunch graph matching that tags facial features and performs a local feature analysis (LFA) of ordered matches, enabling relatively accurate detection of six basic emotional states: anger, disgust, fear, happiness, sadness, and surprise.
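The final labeling step of such a pipeline can be illustrated with a toy nearest-template comparison in plain Python: an ordered feature vector (standing in for the LFA match scores) is compared against a per-emotion template and the nearest one wins. The template values and vector length are invented for illustration; the actual system uses elastic bunch graph matching and LibSVM, not this toy distance rule.

```python
import math

# Invented per-emotion templates over a 3-value feature vector,
# purely to illustrate nearest-template labeling.
EMOTION_TEMPLATES = {
    "anger":     [0.9, 0.1, 0.2],
    "disgust":   [0.7, 0.3, 0.1],
    "fear":      [0.4, 0.8, 0.3],
    "happiness": [0.1, 0.2, 0.9],
    "sadness":   [0.2, 0.1, 0.1],
    "surprise":  [0.3, 0.9, 0.8],
}

def classify_emotion(features):
    """Return the emotion whose template is nearest in Euclidean distance."""
    return min(
        EMOTION_TEMPLATES,
        key=lambda emotion: math.dist(features, EMOTION_TEMPLATES[emotion]),
    )

print(classify_emotion([0.15, 0.25, 0.85]))  # → happiness
```

A graduated output (distances to all six templates rather than a single label) would also support the scaled emotion intensities discussed in the future-work section.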
No images of the participants are permanently stored in the process: as soon as the system analyzes a participant's transmitted image, the image is purged. From the analysis, the system determines the best-guess emotional state.
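The purge policy can be sketched as follows: the transmitted image exists only as an in-memory buffer for the duration of the analysis, is never written to disk, and its reference is dropped immediately afterward. The function names are hypothetical, for illustration only.

```python
def analyze(image_bytes):
    """Stand-in for the face-detection + emotion-scoring stage."""
    return "happiness"  # stub result for illustration

def handle_submission(image_bytes):
    """Analyze a transmitted image, then purge it unconditionally."""
    try:
        emotion = analyze(image_bytes)  # works only on the in-memory buffer
    finally:
        del image_bytes                 # purge: drop the only reference
    return emotion
```

Because the image is purged in a `finally` block, it is discarded even if analysis raises an error, so no failure path leaves a participant's face in memory longer than necessary.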
This work was initially developed as a tool for the delivery of marketing data. The system presupposes an information delivery that can be used for advertising, social communication, or other communications needed in a public or social environment. Extending the base technology, the content, images, and media delivered by such a system can vary widely depending on the installation location, the expected crowd, and the population demographic.
Further work will refine the algorithm to increase accuracy and to allow capture and detection under a wide variety of environmental conditions. Inherent in this project is the need for a varied, graduated scale of certain emotions, such as anger and happiness.