Abstract
Future affective computing research could be enhanced by enabling a computer to recognise a displayer's mental state from an observer's physiological signals, and by using this information to improve recognition algorithms. In this paper, observers' physiological signals are analysed to distinguish displayers' genuine smiles from fake ones. Thirty smile videos were collected from four benchmark databases and classified as showing genuine or fake smiles (half each). Forty observers viewed the videos while four physiological signals were recorded: pupillary response (PR), electrocardiogram (ECG), galvanic skin response (GSR), and blood volume pulse (BVP). After several processing steps, a number of temporal features were extracted, and features minimally correlated between genuine and fake smiles were selected using an NCCA (canonical correlation analysis with neural network) system. Classification accuracy reached 98.8% from PR features using a leave-one-observer-out procedure. In comparison, the best current image processing technique [1] achieved 95% accuracy on the same video data, and observers' conscious choices were 59% correct on average and 90% correct by majority voting. These results demonstrate that humans can non-consciously recognise the quality of smiles about 4% better than current image processing techniques and about 9% better than the conscious choices of groups of observers.
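To make the evaluation protocol concrete, the sketch below illustrates a leave-one-observer-out classification loop of the kind the abstract describes. It is a minimal illustration, not the paper's implementation: the data are synthetic placeholders, the k-nearest-neighbour classifier and the simple correlation-based feature ranking (used here as a stand-in for the NCCA selection step) are assumptions, and the hypothetical helper `select_low_correlation_features` is not from the paper.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic placeholder data: 40 observers x 30 videos, one temporal feature
# vector per trial (e.g. pupillary-response statistics). Not the paper's data.
rng = np.random.default_rng(0)
n_obs, n_vid, n_feat = 40, 30, 12
X = rng.normal(size=(n_obs * n_vid, n_feat))
y = np.tile(np.repeat([1, 0], n_vid // 2), n_obs)   # 1 = genuine, 0 = fake
groups = np.repeat(np.arange(n_obs), n_vid)          # observer ID per trial

def select_low_correlation_features(X_tr, y_tr, k=6):
    """Keep the k features whose genuine vs. fake values are least correlated
    (a simplified, hypothetical stand-in for the paper's NCCA-based selection)."""
    genuine, fake = X_tr[y_tr == 1], X_tr[y_tr == 0]
    n = min(len(genuine), len(fake))
    corr = np.array([abs(pearsonr(genuine[:n, j], fake[:n, j])[0])
                     for j in range(X_tr.shape[1])])
    return np.argsort(corr)[:k]

# Leave-one-observer-out: each fold holds out every trial from one observer.
accuracies = []
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    cols = select_low_correlation_features(X[tr], y[tr])
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    clf.fit(X[tr][:, cols], y[tr])
    accuracies.append(clf.score(X[te][:, cols], y[te]))

print(f"leave-one-observer-out accuracy: {np.mean(accuracies):.3f}")
```

Grouping the folds by observer, rather than by trial, is what makes the reported accuracy reflect generalisation to unseen observers.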
| Original language | English |
| --- | --- |
| Journal | IEEE Transactions on Affective Computing |
| DOIs | |
| Publication status | Accepted/In press - Jan 1 2018 |
Keywords
- Affective Computing
- Databases
- Electrocardiography
- Face
- Fake Smile
- Feature extraction
- Genuine Smile
- Observers
- Physiological Signals
- Physiology
- Temporal Features
- Videos
ASJC Scopus subject areas
- Software
- Human-Computer Interaction