A high-performance point-of-purchase content delivery system. This project explores the feasibility of user-centric delivery of point-of-purchase information using biometric data capture and intelligent analysis of facial data, height, weight, body type, age, gender, and other forms of data that can be captured directly in a non-invasive manner.
Such a system must have an inherent intelligence that is ambient and ubiquitous, allowing it to interpret a wide variety of easily collected stimuli. The system's intelligence must offer a range of options that are autonomously responsive and give meaningful responses to visual and sensor cues.
The aim of this research is to study the feasibility of a system that is able to deliver user-aware and user-centric point-of-purchase information. The system must have an inherent intelligence that is ambient and ubiquitous. It must be able to give autonomous and meaningful responses to visual and sensor cues. Such a system prefigures an information-delivery model for advertising, social communication, and even forms of emergency communication that may be needed in a public or social environment.
The video above demonstrates an early stage of the HiPOP project. In this video we can see the ability to detect gender and smiles. This technique currently uses an image-processing approach, identifying shapes within an image field using the Viola–Jones method.
It applies OpenCV's Haar-like feature framework, the FERET database of facial images, and a support vector machine (LibSVM) to classify faces and glean attributes such as gender or other individual characteristics. This system will be able to capture information such as facial data, height, weight, gender, race, location, gaze, product label, etc. At runtime the system intelligently analyses such data to provide better solutions for advertising and social communication. The image processing requires: segmenting out face rectangles; scaling to a 24×24 grayscale image; and equalising the histogram to increase contrast. An OpenCV library is used to detect and segment faces from video images:
1. Using a cascade of boosted classifiers working with Haar-like features.
2. Training the classifiers on a database of face and non-face images.
3. Scanning input images at different scales to find regions that are likely to contain faces.
Gender | Glasses | Smile Detection
We use the SVM classifier method: data points are treated as p-dimensional vectors. We use the LibSVM library.
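The SVM step can be illustrated as below. This sketch uses scikit-learn's `SVC`, which wraps LibSVM, and hypothetical synthetic data standing in for the real face features (a flattened 24×24 patch gives p = 576 dimensions):

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn's SVC is built on LibSVM

# Hypothetical toy data: two well-separated clusters of p-dimensional
# vectors standing in for face features; labels 0 and 1 stand in for
# attributes such as female/male or no-smile/smile.
rng = np.random.default_rng(0)
X_class0 = rng.normal(loc=-1.0, scale=0.5, size=(50, 576))
X_class1 = rng.normal(loc=1.0, scale=0.5, size=(50, 576))
X = np.vstack([X_class0, X_class1])
y = np.array([0] * 50 + [1] * 50)

# Fit a maximum-margin separating hyperplane in the 576-dimensional space.
clf = SVC(kernel="linear")
clf.fit(X, y)
```

In the real system the training vectors would come from labelled FERET or GENKI images rather than random draws, and the kernel and parameters would be chosen by cross-validation.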
1. Database: FERET (http://www.nist.gov/humanid/colorferet), subsets FA | FB | QR | QL | HL | HR. 2. Rate of accuracy on FB (dvd2): 246/268 = 91.791%.
1. Databases: FERET (FA | FB | QR | QL) and GENKI WWW (from the internet). 2. Rate of accuracy on FB (dvd2): 248/268 = 92.5373%.
1. Database: GENKI (http://mplab.ucsd.edu), the MPLab GENKI-4K dataset. 2. Rate of accuracy on GENKI: file2113.jpg – file2440.jpg: 278/323 = 86.0681%; file2000.jpg – file2500.jpg: 417/492 = 84.7561%.
3. Results are better when tested with webcams.
Future work: age estimation. 1. Use PCA to obtain an age vector (the facial features of each age). 2. Extract wrinkle features using edge detection, then use a classifier to separate people of different ages. Further work will move towards the use of a 3D mesh vector template using elastic bunch graph matching, which tags facial features and performs a local feature analysis (LFA) of ordered matches, followed by a surface texture analysis (STA) that allows for detection of skin features. Further goals include the detection of basic emotional states: anger, disgust, fear, happiness, sadness, and surprise. Additional work will focus on personal attribute cues such as hairstyles, clothing, and other cues that allow for estimation of demographic groupings and "tribe" associations.
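The wrinkle-extraction idea in step 2 can be sketched as a simple edge-density feature. This is a hypothetical illustration, not the project's implementation: a NumPy central-difference gradient approximates the edge-detection step, and the resulting density would be one input to an age classifier:

```python
import numpy as np

def wrinkle_edge_density(patch, threshold=50.0):
    """Crude wrinkle cue: fraction of pixels with a strong gradient magnitude.

    `patch` is a 2-D grayscale array (e.g. a skin region of the face).
    The threshold of 50.0 is an illustrative choice, not a tuned value.
    """
    p = patch.astype(float)
    gx = np.zeros_like(p)
    gy = np.zeros_like(p)
    gx[:, 1:-1] = p[:, 2:] - p[:, :-2]   # horizontal central difference
    gy[1:-1, :] = p[2:, :] - p[:-2, :]   # vertical central difference
    mag = np.hypot(gx, gy)               # gradient magnitude per pixel
    return float((mag > threshold).mean())
```

Smooth skin yields a density near zero, while a wrinkled region yields a higher value; a classifier (e.g. the same SVM pipeline) would then separate age groups using this and other features.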