TY - JOUR
T1 - Using Temporal Features of Observers' Physiological Measures to Distinguish between Genuine and Fake Smiles
AU - Hossain, Md Zakir
AU - Gedeon, Tom
AU - Sankaranarayana, Ramesh
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - Future affective computing research could be enhanced by enabling the computer to recognise a displayer's mental state from an observer's reaction (measured by physiological signals), using this information to improve recognition algorithms, and eventually to build computer systems which are more responsive to human emotions. In this paper, observers' physiological signals are analysed to distinguish displayers' genuine smiles from fake smiles. Thirty smile videos were collected from four benchmark databases and classified as showing genuine or fake smiles. Forty observers viewed the videos while we recorded four physiological signals: pupillary response (PR), electrocardiogram (ECG), galvanic skin response (GSR), and blood volume pulse (BVP). A number of temporal features were extracted after a few processing steps, and features minimally correlated between genuine and fake smiles were selected using the NCCA (canonical correlation analysis with a neural network) system. Finally, classification accuracy was found to be as high as 98.8 percent from PR features using a leave-one-observer-out process. In comparison, the best current image processing technique [1] on the same video data was 95 percent correct, and observers' conscious choices were 59 percent (on average) to 90 percent (by voting) correct. Our results demonstrate that humans can non-consciously (or emotionally) recognise the quality of smiles 4 percent better than current image processing techniques and 9 percent better than the conscious choices of groups.
AB - Future affective computing research could be enhanced by enabling the computer to recognise a displayer's mental state from an observer's reaction (measured by physiological signals), using this information to improve recognition algorithms, and eventually to build computer systems which are more responsive to human emotions. In this paper, observers' physiological signals are analysed to distinguish displayers' genuine smiles from fake smiles. Thirty smile videos were collected from four benchmark databases and classified as showing genuine or fake smiles. Forty observers viewed the videos while we recorded four physiological signals: pupillary response (PR), electrocardiogram (ECG), galvanic skin response (GSR), and blood volume pulse (BVP). A number of temporal features were extracted after a few processing steps, and features minimally correlated between genuine and fake smiles were selected using the NCCA (canonical correlation analysis with a neural network) system. Finally, classification accuracy was found to be as high as 98.8 percent from PR features using a leave-one-observer-out process. In comparison, the best current image processing technique [1] on the same video data was 95 percent correct, and observers' conscious choices were 59 percent (on average) to 90 percent (by voting) correct. Our results demonstrate that humans can non-consciously (or emotionally) recognise the quality of smiles 4 percent better than current image processing techniques and 9 percent better than the conscious choices of groups.
KW - Affective computing
KW - fake smile
KW - genuine smile
KW - physiological signals
KW - temporal features
UR - http://www.scopus.com/inward/record.url?scp=85055709183&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055709183&partnerID=8YFLogxK
U2 - 10.1109/TAFFC.2018.2878029
DO - 10.1109/TAFFC.2018.2878029
M3 - Article
AN - SCOPUS:85055709183
SN - 1949-3045
VL - 11
SP - 178
EP - 188
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
IS - 1
M1 - 8509156
ER -