A reactive artwork that responds subtly to a viewer's gender, age, and mood.
This installation presents a dynamic, time-lapse still-life painting that shifts subtly in response to the sensed personal characteristics of viewers in the exhibition space.
The body of work explores the “subtle presence” of autonomously responsive media delivery using biometric data. In the exhibition space, the system analyzes the viewer's face, age, and gender through non-invasive video capture and, based on the personal characteristics it senses, modifies the form and content of the media. The installation seeks to engage viewers in an aesthetic experience that responds to their physical attributes and moods.
The video above demonstrates an early stage of the HiPOP project; in it we can see the system's ability to detect gender and smiles. The current technique uses an image-processing approach: shapes within an image field are identified with the Viola-Jones Haar-like feature detector in OpenCV, and a support vector machine (LibSVM), trained on the FERET database of facial images, classifies the detected faces to glean attributes such as gender and other individual characteristics. In a multilayered strategy, media content can be pushed from a number of media streams, each targeted to a detected emotional state.
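The detection-plus-classification pipeline described above can be sketched in miniature. The snippet below is a toy illustration, not the installation's code: it computes a two-rectangle Haar-like feature via an integral image (the core trick of Viola-Jones) and passes it to a linear decision function standing in for a trained LibSVM classifier. The weights, bias, and feature layout are hypothetical.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum.
    Bright-vs-dark contrasts like this are what Viola-Jones cascades test."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

def svm_decide(features, weights, bias):
    """Linear SVM decision rule, sign(w.x + b): a toy stand-in for LibSVM."""
    return 1 if np.dot(features, weights) + bias >= 0 else -1

# Toy 4x4 "image" whose left half is bright -- a strong edge feature.
img = np.zeros((4, 4))
img[:, :2] = 10
ii = integral_image(img)
feature = haar_two_rect(ii, 0, 0, 4, 4)   # 80 - 0 = 80
label = svm_decide(np.array([float(feature)]), np.array([0.01]), -0.5)
```

In the real system a cascade of many such features locates the face, and the SVM consumes a much richer feature vector cropped from the detected region; the structure of the computation, however, is the same.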
When a viewer enters the camera's field of view, the face search is invoked. Once a face is detected, the system compares it to a database of images representing the seven basic moods. If the mood is detected as "pleasant," then the longer the viewer displays a happy disposition, the more vibrant and colorful the imagery in the projected artwork becomes. If the detected mood is not "pleasant," the imagery becomes dull and lifeless.
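The mood-to-imagery mapping can be sketched as a simple feedback loop. In this hedged sketch, `update_vibrancy` and its rate constant are hypothetical names assuming one mood label arrives per frame: sustained "pleasant" readings ramp a vibrancy level up, any other mood decays it, and the level blends each pixel between grayscale (dull) and full color (vibrant).

```python
import numpy as np

def update_vibrancy(vibrancy, mood, dt, rate=0.1):
    """Ramp a 0..1 vibrancy level while the mood is 'pleasant',
    decay it toward dull otherwise. rate/dt tuning is hypothetical."""
    if mood == "pleasant":
        vibrancy += rate * dt
    else:
        vibrancy -= rate * dt
    return min(1.0, max(0.0, vibrancy))

def apply_vibrancy(rgb, vibrancy):
    """Blend each pixel between its grayscale value (vibrancy=0)
    and its original color (vibrancy=1)."""
    gray = rgb.mean(axis=-1, keepdims=True)
    return gray + vibrancy * (rgb - gray)

# Per-frame loop: mood reading nudges vibrancy, vibrancy recolors the frame.
frame = np.array([[[1.0, 0.0, 0.0]]])   # one pure-red pixel
v = 0.5
v = update_vibrancy(v, "pleasant", dt=1.0)   # viewer keeps smiling -> 0.6
out = apply_vibrancy(frame, v)
```

Because vibrancy accumulates over frames rather than snapping to each reading, momentary misclassifications only nudge the imagery, matching the "subtle" response the piece aims for.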
Such a system must have an inherent intelligence that is ambient and ubiquitous, allowing it to interpret a wide variety of easily collected stimuli. The system's intelligence must offer a range of options that respond autonomously and give meaningful responses to visual and sensor cues.
This project builds on previous work exploring the feasibility of user-centric delivery of point-of-purchase marketing content using biometric data capture. The system uses facial recognition to transmit facial images of persons from remote sources, such as mobile environments, or from local sources within an exhibition space. Intelligent analysis of the facial data extracts physical cues, age, gender, and other forms of data that can be captured directly in a non-invasive manner. Following analysis, the embedded marketing message can be focused on the individual based on the cues collected.
Further work will refine the algorithm to increase accuracy and to allow capture and detection under a wide variety of environmental conditions. Inherent in this project is the need to develop a graduated scale for certain emotions, such as anger and happiness.