Background
Tools to quantify layperson assessments of facial palsy are lacking. In this study, artificial intelligence was applied to develop a proxy for layperson assessment and to compare its sensitivity with that of existing outcome measures.
Methods
Artificially intelligent emotion detection software was used to develop the emotionality quotient, defined as the percentage probability of perceived joy over the percentage probability of perceived negative emotions during smiling, as predicted by the software. The emotionality quotient was used to analyze the emotionality of voluntary smiles in normal subjects and in patients with unilateral facial palsy before and after smile reanimation, and was compared with oral commissure excursion and layperson assessments of facial palsy patients.
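As an illustration of how such a score might be derived, the sketch below assumes generic per-frame emotion probabilities (on a 0 to 100 scale) as output by emotion detection software; the input format, the set of negative emotion labels, and the peak-value aggregation across frames are assumptions for demonstration, not the study's implementation.

    # Illustrative sketch only: the abstract does not specify the software's
    # output format, so per-frame probability dictionaries, the list of
    # negative emotion labels, and peak-value aggregation are assumptions.

    NEGATIVE_EMOTIONS = ("anger", "contempt", "disgust", "fear", "sadness")

    def emotionality_quotient(frames):
        """Return the emotionality quotient as a (joy, -negative) pair.

        `frames` is a list of dicts mapping emotion labels to percentage
        probabilities predicted for each video frame of a voluntary smile.
        """
        peak_joy = max(f.get("joy", 0.0) for f in frames)
        peak_negative = max(
            sum(f.get(e, 0.0) for e in NEGATIVE_EMOTIONS) for f in frames
        )
        # Reported in the study's notation: percent joy over percent negative
        # emotion, with the negative component written as a negative number.
        return (round(peak_joy), -round(peak_negative))

    # Example: a smile in which the software perceives predominantly joy
    frames = [
        {"joy": 12.0, "sadness": 3.0},
        {"joy": 84.0, "anger": 1.0},
    ]
    print(emotionality_quotient(frames))  # (84, -3)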
Results
In voluntary smiles of 10 normal subjects, 100 percent joy and no negative emotion were detected (interquartile range, 0/1). The median preoperative emotionality quotient of 30 facial palsy patients was 15/-60 (interquartile range, 73/62); postoperatively, the median emotionality quotient was 84/0 (interquartile range, 28/5). In 134 smile reanimation patients, no correlation was found between postoperative oral commissure excursion and emotionality quotient score. However, in 61 preoperative patients, a moderate correlation was found between layperson-assessed disfigurement and negative emotion perception (correlation coefficient, 0.516; p < 0.001).
Conclusions
Computer vision artificial intelligence software detected less joy and more negative emotion in the smiles of facial palsy patients than in those of normal subjects. Following smile reanimation, significantly more joy and less negative emotion were detected. The emotionality quotient correlated with layperson assessments. Its simplicity, sensitivity, and objectivity make the emotionality quotient an attractive potential proxy for layperson assessment, an ideal outcome measure in facial palsy.