MoodRing
A social media mobile app that lets participants communicate their emotional states in real time to contacts, family, and friends.

Personalized experiences can be layered into mobile devices and personal communication devices. Because we can detect smiles, frowns, confusion, and anger, the content of the device can be altered to create a fun, interactive experience that is personally responsive and intelligent. Through direct peer-to-peer communication between mobile apps, or through social media such as Facebook, Twitter, or MySpace, moods can be conveyed instantly to friends and family, when the individual desires. Individuals can join “Mood Rings” that allow them to share only within their selected contacts and friends, creating a more personalized social media experience. Rings can be created with varying levels of intimacy, from family members to close friends, out to acquaintances, and further to broader groups, as sketched below.
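As an illustrative sketch of this tiered sharing model: the following Python fragment models rings with nested intimacy levels. All names here (MoodRing, Intimacy, broadcast) are hypothetical, invented for the example, and are not part of the app's actual code.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Intimacy(IntEnum):
    """Nested levels of closeness, from family outward to broad groups."""
    FAMILY = 0
    CLOSE_FRIENDS = 1
    ACQUAINTANCES = 2
    BROAD_GROUP = 3


@dataclass
class MoodRing:
    name: str
    level: Intimacy
    members: set = field(default_factory=set)


def broadcast(mood: str, rings: list, max_level: Intimacy) -> set:
    """Share a mood only with rings at or inside the chosen intimacy level."""
    recipients = set()
    for ring in rings:
        if ring.level <= max_level:
            recipients |= ring.members
    return recipients


rings = [
    MoodRing("family", Intimacy.FAMILY, {"mom", "dad"}),
    MoodRing("college", Intimacy.ACQUAINTANCES, {"sam", "lee"}),
]
# Sharing at CLOSE_FRIENDS level reaches family but not acquaintances.
print(broadcast("happy", rings, Intimacy.CLOSE_FRIENDS))  # {'mom', 'dad'}
```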


The video above demonstrates capture, normalization, and detection of a face, along with the process of emotion recognition and the resulting response based on the emotions detected.

The system intelligently analyzes participants' faces from camera captures made through the mobile app. Based on the sensed personal characteristics of the viewer, it modifies the form and content of the media. This installation seeks to engage both the viewer and remote participants in an aesthetic experience that is audience controlled.
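A minimal sketch of this capture-and-respond loop, assuming OpenCV for camera access; classify_mood() is a hypothetical stand-in for the backend analysis described below, and the response table is invented for the example.

```python
import cv2

# Illustrative emotion-to-content table; the app's actual responses are unspecified.
RESPONSES = {
    "happy": "brighten palette, play upbeat theme",
    "angry": "soften palette, calm audio",
    "confused": "show a help overlay",
}


def classify_mood(frame) -> str:
    """Hypothetical stand-in for the backend emotion analysis."""
    return "happy"


cap = cv2.VideoCapture(0)  # default camera on the device
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mood = classify_mood(frame)
    print(RESPONSES.get(mood, "keep neutral content"))
    cv2.imshow("MoodRing", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```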

moodR v.01 alpha

The system is implemented using a backend server application for the analysis of facial images within an image field. It applies the Viola and Jones Haar-like features detector from OpenCV [1], [2], together with the FERET database [4] of facial images and a support vector machine (LibSVM) [3], to classify the camera's view field and identify whether a face exists. The system processes the detected faces using an elastic bunch graph matching technique [5] that is trained to determine facial expressions. These expressions are graphed on a sliding scale according to their distance from a target emotion graph, giving an approximate determination of the user's mood.
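A minimal sketch of the detection and classification stage, assuming OpenCV's Python bindings and scikit-learn's SVC, which wraps LIBSVM [3]. The 64x64 crop size, RBF kernel, and the placeholder training arrays are illustrative assumptions, not the project's actual configuration; in a real setup the training data would be FERET-style labelled face crops [4].

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Viola-Jones Haar cascade shipped with OpenCV [1], [2].
CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def detect_faces(frame):
    """Return normalized 64x64 grayscale face crops found in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], (64, 64))
            for (x, y, w, h) in boxes]


# Placeholder training set: in practice, X_train would hold flattened
# face crops and y_train their expression labels.
X_train = np.random.rand(20, 64 * 64)
y_train = np.array([0, 1] * 10)  # e.g. 0 = neutral, 1 = smiling

clf = SVC(kernel="rbf").fit(X_train, y_train)  # SVC is a LIBSVM wrapper [3]


def classify(face) -> int:
    """Predict an expression label for one normalized face crop."""
    return int(clf.predict(face.reshape(1, -1) / 255.0)[0])
```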



Further work will refine the algorithm to increase accuracy and to allow capture and detection under a wide variety of environmental conditions. Inherent in this project is the need for a graduated scale of certain emotions, such as anger and happiness, which would allow users to communicate varying levels of their moods to those in their rings.
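One way such a graduated scale could work, sketched under the assumption that each detected expression reduces to a feature vector compared against per-emotion prototypes; the vectors and the max_dist normalization below are invented for illustration, standing in for the trained bunch-graph model [5].

```python
import numpy as np

# Hypothetical per-emotion prototype vectors; in the actual pipeline these
# would come from the trained expression model [5], not hand-picked numbers.
PROTOTYPES = {
    "happy": np.array([0.9, 0.1, 0.8]),
    "angry": np.array([0.1, 0.9, 0.2]),
}


def mood_intensity(features, emotion, max_dist=1.0):
    """Map distance from the target emotion's prototype to a 0..1 intensity."""
    d = np.linalg.norm(features - PROTOTYPES[emotion])
    return max(0.0, 1.0 - d / max_dist)


features = np.array([0.8, 0.2, 0.7])  # measured expression features
print(f"happiness level: {mood_intensity(features, 'happy'):.2f}")  # ~0.83
```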

1. Viola, P., & Jones, M. (2001). Robust real-time object detection. Paper presented at the Second International Workshop on Statistical and Computational Theories of Vision: Modeling, Learning, Computing, and Sampling.

2. Bradski, G., & Kaehler, A. (2008). Learning OpenCV. O'Reilly Media.

3. Burges, C. J. C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2, 121–167.

4. Color FERET Database, National Institute of Standards and Technology. http://www.nist.gov/humanid/colorferet

5. Wiskott, L., Fellous, J.-M., Krüger, N., & von der Malsburg, C. (1997). Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 775–779.

6. Bronstein, A. M., Bronstein, M. M., & Kimmel, R. (2005). Three-dimensional face recognition. International Journal of Computer Vision, 64(1), 5–30.

7. Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of Cognition and Emotion. Sussex, UK: John Wiley & Sons. http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf