MoodModArt – Emota™ v4.0
A reactive artwork that detects facial cues and moods from viewers in a gallery space and/or via mobile apps.
On the left are four oil-painting still-life panels. On the right is a large LCD screen panel with a dynamic, reactive animated film loop.
This artwork was on display in the West Mich Redux exhibition at the South Haven Center for the Arts, South Haven, MI, in 2022.
This installation presents a dynamic time-lapse still-life painting that shifts subtly in response to personal characteristics of the participants, sensed via a mobile app from remote locations. The body of work explores autonomously responsive media delivery, modified by detected and analyzed biometric data. In the exhibition space, the artworks presented are a series of time-lapse animated or filmed scenes that change in atmospheric, stylistic, or other variable elements.
The video above demonstrates the capture, normalization, and detection of a face and the process of emotion recognition, with the resulting response based on the emotions detected.
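As a rough sketch of the normalization step, the snippet below (using OpenCV, with a hypothetical bounding box handed over from the detection stage) crops a face, evens out lighting, and resizes it to a fixed input size; it illustrates a typical pipeline of this kind rather than the installation's exact code.

```python
import cv2

def normalize_face(frame, box, size=(128, 128)):
    """Crop a detected face region and normalize it for the recognition stage."""
    x, y, w, h = box  # hypothetical (x, y, width, height) from the detector
    face = frame[y:y + h, x:x + w]
    # Grayscale plus histogram equalization evens out gallery and phone lighting.
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    # A fixed input size keeps downstream feature vectors comparable.
    return cv2.resize(gray, size)
```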
The system analyzes participants' faces, captured by camera via a mobile app, and modifies the form and content of the media based on the sensed personal characteristics of the viewer. This installation seeks to engage the viewer, as well as remote participants, in an aesthetic experience that is audience controlled.
The system is implemented using a backend server application that analyzes the facial images transmitted from the mobile apps. The facial images are analyzed for emotion, yielding an experience that is responsive to participants' personal physical attributes and moods.
Technically, the design of such systems requires an inherent intelligence that is ambient and ubiquitous, allowing for interpretation of a wide variety of stimuli that can be easily collected. The system's intelligence must offer a range of options that can be autonomously reactive and give meaningful responses to the visual/sensor cues captured from the environment, as well as from mobile devices actively engaged by remote participants.
Our method uses a combination of vector-map and 3D-mesh local feature analysis (LFA) of ordered matches and an image-processing approach that identifies shapes within an image field, using algorithms such as the Viola-Jones Haar-like feature detector in OpenCV [1], [2] and a support vector machine library (LibSVM) [3].
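As a minimal sketch of this detection stage, the code below uses OpenCV's bundled pretrained frontal-face Haar cascade; the parameters shown are illustrative defaults, not the installation's tuned values.

```python
import cv2

# OpenCV ships pretrained Viola-Jones Haar cascades; load the frontal-face one.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return (x, y, w, h) bounding boxes for faces found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window search over Haar-like features.
    return face_cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60)
    )
```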
For emotion detection, we use a 3D mesh vector template with elastic bunch graph matching, which tags facial features and performs a local feature analysis (LFA) of ordered matches, enabling relatively accurate detection of the basic emotional states: anger, disgust, fear, happiness, sadness, and surprise [7].
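The bunch-graph matching itself is involved, so the sketch below covers only the final classification stage, assuming the LFA step has already produced a fixed-length feature vector per face. It uses scikit-learn's SVC, which wraps LibSVM; the training-data file names and feature format are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Hypothetical training data: one fixed-length LFA feature vector per face
# (produced by the upstream bunch-graph matching) and an emotion label.
X_train = np.load("lfa_features.npy")    # shape (n_samples, n_features)
y_train = np.load("emotion_labels.npy")  # integers 0..5 indexing EMOTIONS

# scikit-learn's SVC wraps LibSVM; probability=True yields graduated
# per-class scores rather than a single hard label.
clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

def recognize_emotion(feature_vector):
    """Return the best-guess emotion and a per-emotion probability map."""
    probs = clf.predict_proba([feature_vector])[0]
    return EMOTIONS[int(np.argmax(probs))], dict(zip(EMOTIONS, probs))
```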
No images of the participants are permanently stored in the process: as soon as the system analyzes a participant's transmitted image, the image is purged. From the analysis, the system determines the best-guess emotional state.
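A minimal sketch of that in-memory handling, assuming a Flask backend and a hypothetical analyze_frame() entry point into the pipeline sketched above; the decoded image exists only for the duration of the request and is never written to disk.

```python
import cv2
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    """Accept one facial image from the mobile app; analyze, respond, purge."""
    # Decode the upload directly from memory; nothing touches the filesystem.
    buf = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    emotion, scores = analyze_frame(frame)  # hypothetical pipeline entry point
    # Drop all references so no copy of the image outlives this request.
    del frame, buf
    return jsonify({
        "emotion": emotion,
        "scores": {k: float(v) for k, v in scores.items()},
    })
```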
This work was initially developed as a tool for the delivery of marketing and advertising in public spaces. The system presupposes information delivery that can be customized autonomously for communication in public spaces or social environments. Extending the basic technology, the content, images, and media delivered by such a system can vary widely depending on the installation location, expected crowd, and population demographics.
Further work has refined the algorithm, increased accuracy, and extended the capture and detection of physical cues from audiences and participants; the results can be seen in the work under the link "Rgw". This software design detects biometric data on a graduated scale of the seven basic emotion states, as defined by Ekman et al., in real time.
Collaborators: Pensyl, W. R.; Song, Shuli Lily; Dias, Walson; Huang, Yucheng Walter; Fish, Joshua; Animation by Dave Maggio