Approach to Recognizing of Visualized Human Emotions for
Marketing Decision Making Systems
Iryna Spivaka, Svitlana Krepycha, Oleksandr Fedorova and Serhii Spivakb
a
    West Ukrainian National University, Lvivska str. 11, Ternopil, 46009, Ukraine
b
    Ternopil Ivan Puluj National Technical University, Ruska str. 56, Ternopil, 46001, Ukraine


                  Abstract
                  The article proposes an approach to recognizing visualized human emotions for
                  marketing decision-making systems. The analysis of previous studies has shown the relevance
                  and expediency of the proposed approach, as it reduces the computing resources needed to
                  implement the recognition process while increasing the speed of obtaining the result. The
                  article presents an algorithm for step-by-step identification of a visualized human emotion
                  based on comparing changes in the positions of key points of a selected facial element with
                  changes in the characteristics of that element.

                  Keywords
                  Recognition, Visualized Human Emotions, Pixel, Color Model, Marketing Decision.

1. Introduction

    Trends in any commercial activity show that making marketing decisions that exert the greatest
influence on consumers, and thereby maximize profit, is highly relevant today. In recent years, nonverbal
information, namely the study of facial expressions, has become the subject of intensive research in
marketing. It is known from psychology that all human emotions can be classified into six basic
emotions, which are most often used to obtain nonverbal information. The ability to automatically recognize
this kind of information will simplify the interpretation of the emotions on a person's face while they watch
an advertisement, test a product, or use a service.
    The proposed approach will help to understand whether the consumer really liked the product and what
color, size, or smell they prefer, since surveys often yield inaccurate information. The proposed approach
makes it possible to observe the informal reaction of users, which helps to understand what to focus
on and what to improve.

2. Related works

    In the vast majority of methods, emotion recognition proceeds in three steps [1]. In the
first step, features are extracted from fixed images; in the second step, emotions are detected with the
help of already trained classifiers; and the third step is face recognition itself. The most common descriptor
is the Local Binary Pattern (LBP) [2], which describes the pixels around a central pixel in binary form. The

COLINS-2021: 5th International Conference on Computational Linguistics and Intelligent Systems, April 22–23, 2021, Kharkiv, Ukraine
EMAIL: spivak.iruna@gmail.com (I. Spivak); msya220189@gmail.com (S. Krepych); fedorov.oleks@gmail.com (O. Fedorov);
spivaksm@ukr.net (S. Spivak)
ORCID: 0000-0003-4831-0780 (I. Spivak); 0000-0001-7700-8367 (S. Krepych); 0000-0002-8080-9306 (O. Fedorov); 0000-0002-7160-2151
(S. Spivak)
            ©️ 2021 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)
LBP operator is applied to the central pixel of the image and uses the 8 pixels that surround it, taking the
central pixel as the reference. The main disadvantage of this method is that the image needs high-quality
preprocessing because of its high sensitivity to noise.
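The LBP operator just described can be sketched for a single 3×3 neighborhood as follows. The clockwise bit ordering used here is a common convention, not something specified in the text:

```python
# Minimal sketch of the LBP operator: each of the 8 neighbors is compared
# with the central pixel and encoded as one bit of an 8-bit code.
def lbp_code(patch):
    """patch: 3x3 list of grayscale values; returns the 8-bit LBP code
    of the central pixel (neighbor >= center -> 1, else 0)."""
    center = patch[1][1]
    # clockwise order starting at the top-left neighbor
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

Applying this operator to every pixel of an image (as in Figure 1) yields the binary texture description that the classifiers in [2] are trained on.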




Figure 1: Example of an applied LBP pattern to the image

   All existing face recognition methods have their pros and cons; let us look at the main
disadvantages of each.
   The geometric approach to face recognition is one of the first developed methods. It consists of choosing
the key points of the face, such as the lips, the center of the eye, etc. This method does not require expensive
equipment, but this comes at the cost of low reliability.
   Disadvantages:
       Low reliability;
       High lighting requirements;
       Mandatory frontal image of the person;
       Does not take into account the possibility of changing facial expressions.




Figure 2: Face distribution using the geometrical method

   The Viola-Jones method with Haar features [3, 4, 13, 15] is the most popular method for finding the facial
area in images because of its relatively high speed and efficiency. Face recognition in this method is based
on three basic principles:
        Integral representation of the image on the basis of Haar features, which allows the
         necessary features to be calculated quickly;
      Classifier construction based on the adaptive boosting algorithm (AdaBoost);
      Combining classifiers into a cascade structure.
   Disadvantages:
      At an angle of 30° or more the probability of recognition drops rapidly;
      A person cannot be detected at an arbitrary angle;
      Training takes a lot of time;
      Sensitive to lighting.
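The first principle above, the integral (summed-area) representation, can be sketched in a few lines: once the table is built, the sum over any rectangular Haar feature region is obtained from just four lookups, which is what makes the method fast:

```python
# Sketch of the integral image underlying the Viola-Jones method.
def integral_image(img):
    """img: 2D list of pixel values; returns an (h+1) x (w+1) summed-area
    table with a zero border row and column for convenient indexing."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1][x0:x1] using only four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 1 + 2 + 4 + 5 = 12
```

A Haar feature value is then simply the difference between two or three such rectangle sums, each costing constant time regardless of the rectangle size.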




Figure 3: Haar feature used for Viola Jones face recognition method

   Active appearance models (AAM) are statistical models of images that can be fitted to a real image
by various deformations. Fitting the model to a specific face image is performed by solving an
optimization problem whose essence is to minimize a cost functional.




Figure 4: The process of realization of active appearance model

   However, these approaches require a lot of time and resources for training, which limits their use on
large samples of input data [7-9, 11, 12]. The article proposes an approach to recognizing the emotions of
the human face by detecting and tracking changes in the positions of key points of the eyes, mouth and
eyebrows, which does not require large computational resources for its implementation.

3. Overview of the Research

   Emotion is one of the basic elements of the human psyche. A human's emotions are distinguished
depending on the degree to which their needs are satisfied. Emotions can be positive, negative, or
neutral, when a person does not react in any way and remains in their original state. For example, when
viewing an advertisement, some people feel anger or disgust, while others feel pleasure and interest. A
person's behavior also changes depending on what emotions they are experiencing.
   When making marketing decisions, the automated system must be able to identify and recognize one of
the six basic human emotions:
        Surprise - a short-term feeling and state of a person that occurs in a sudden and unexpected
   situation.
        Fear - a state of anxiety and restlessness caused by the expectation of something undesirable or
   unpleasant.
        Disgust - a feeling of disapproval towards someone or something.
        Angry - a strong feeling of dissatisfaction that arises when a person's needs or expectations have
   not been satisfied.
        Happy - a feeling of satisfaction that arises when a person's needs or expectations have been met.
        Sad - a feeling opposite to happy, which arises in case of loss or helplessness.




Figure 5: Detection of emotions on a human's face

   Figure 5 on the left shows the face of a man in a calm (neutral) state. To detect emotions, it is first
necessary to determine the area of the person's eyes and mouth in the neutral state. The neutral state is
characterized by quantitative indicators that belong to neither positive nor negative emotions. This is
best seen in the example of the eyes in Figure 5. In a neutral state, a person's eyes are neither as open nor
as squinted as in other emotions. The situation is similar for the other key points of the face.
   Subsequently, depending on the emotion reflected on the face, this area will increase or decrease. For
example, take two emotions, disgust and happiness, which are very similar when identifying key points: in
both, the mouth area increases and the eye area decreases. Once the program determines this, the search
over the remaining key points is narrowed to these two emotions instead of passing through all six, which
saves time in identifying the final emotion, in contrast to existing methods and approaches [5, 6, 14].

4. Proposed approach

   Our analysis of this issue showed that to determine a human's emotions it is enough to select the
key elements of the face, namely the eyes, eyebrows, nose and mouth, rather than to identify the whole
face. The algorithm of the proposed approach includes the following steps:
   Step 1. Determining the face image from a photo or video and converting it into black and white using
the capabilities of the CSS filter function, which can be implemented on any PC hardware, as it does not
require large resources.
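Step 1 above can be sketched independently of the CSS filter mentioned in the text. A minimal grayscale conversion using the standard Rec. 601 luma weights (an assumption on our part; the CSS grayscale filter uses similar luminance weighting) might look like this:

```python
# Sketch of Step 1: converting an RGB pixel matrix to grayscale using the
# Rec. 601 luma weights (0.299 R + 0.587 G + 0.114 B).
def to_grayscale(rgb_image):
    """rgb_image: 2D list of (r, g, b) tuples with components in 0-255;
    returns a 2D list of 0-255 integer intensities."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(image))  # [[76, 150], [29, 255]]
```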
   Step 2. Selection of the key elements of the face and their processing in the HSL (Hue, Saturation,
Luminance) color model (Fig. 6), where Hue denotes both color and shade; Saturation indicates the amount
of gray; and Luminance is the intensity of light projected on a given area in a given direction.
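As an illustration of Step 2, Python's standard colorsys module implements this same cylindrical color model (note that colorsys uses the HLS ordering: hue, lightness, saturation). A minimal per-pixel conversion sketch:

```python
import colorsys

# Sketch of Step 2: converting one RGB pixel to the HSL representation
# used for processing the selected facial elements.
def pixel_to_hsl(r, g, b):
    """r, g, b in 0-255; returns (hue in degrees, saturation %, lightness %)."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return round(h * 360), round(s * 100), round(l * 100)

print(pixel_to_hsl(255, 0, 0))      # pure red: (0, 100, 50)
print(pixel_to_hsl(128, 128, 128))  # mid gray: saturation is 0
```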




Figure 6: The selected pixel area

    Step 3. Each pixel of the photo is replaced by its numerical value (the darker the shade, the smaller
the number, and vice versa) (Fig. 7). These numerical values are used to search for the key points of the
selected facial elements based on a nearest-neighbor algorithm over the shades of black. When the darkness
of the shade decreases by more than 15% (possibly more; this needs to be verified experimentally in
software), the next pixel (depending on the scan direction) does not need to be estimated. This yields a
clear contour of the element being estimated.
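Step 3 can be sketched as a one-dimensional scan along a row or column of the numerical matrix. The 15% threshold comes from the text; reducing the scan to one dimension and the exact stopping rule are simplifying assumptions for illustration:

```python
# Sketch of Step 3: scan pixel values from a seed point and stop when the
# intensity changes by more than a relative threshold (0.15 per the text,
# to be tuned experimentally), marking the contour of the element.
def scan_until_edge(values, start, threshold=0.15):
    """values: 1D list of pixel intensities along a scan line.
    Returns the index of the last pixel still belonging to the element."""
    last = start
    for i in range(start + 1, len(values)):
        prev = values[i - 1]
        if prev and abs(values[i] - prev) / prev > threshold:
            break  # edge found: the next pixel need not be estimated
        last = i
    return last

row = [40, 42, 41, 44, 43, 90, 95, 96]  # dark element, then brighter skin
print(scan_until_edge(row, 0))  # 4: stops before the jump to 90
```

Running such scans in the four directions from a seed pixel inside an element gives the extreme points (leftmost, rightmost, top, bottom) used in the next step.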




Figure 7: Numerical matrix of pixel values of the selected area

  Step 4. Human emotions are identified based on the analysis of changes in the positions of the key
points.
  Figure 8 shows a diagram of the implementation of the proposed method, which includes three main
modules: image determination, conversion and identification.

   The "Image determination" module takes a photo or video as input and determines the face. The "Image
Conversion" module converts the image to black and white format, processes it in the color model, and
constructs a numerical matrix of pixels. The "Identification" module identifies the key points, performs
calculation and classification, and identifies the emotion.

Figure 8: The scheme of implementation of the proposed approach

5. Results & Discussion
   Using the emotion of fear, we will show how a human's face changes and how the software can
detect it. For example, the element "eye" is selected for research from the fixed image of the face.




                            a)                                            b)
Figure 9: Images of eye expansion in the color model: a) in a neutral human state; b) in a state of fear

    Figure 9a) shows the image of the human eye in a neutral state, while Figure 9b) shows its expansion
under the emotion of fear. It is known that when a person is afraid, the eyes widen and their area increases
accordingly. Counting the number of pixels from the extreme left point (1) to the extreme right point (2),
we get 31 pixels in the neutral state and 32 pixels in the fear state. The difference of 1 pixel is not
significant, because during fear the eyes cannot increase in width, only in height. Counting the number of
pixels from the uppermost point (3) to the lowermost point (4), we get 17 pixels in the neutral state and 23
pixels in the fear state. With these data, we can calculate the area of the eye, which shows its increase
under the emotion of fear.
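The area comparison above can be sketched by approximating the eye opening as an ellipse built on the measured width and height; the ellipse model itself is our assumption, as the text does not specify how the area is computed:

```python
import math

# Sketch of the area estimate: the eye opening is approximated as an
# ellipse with semi-axes of half the measured width and height in pixels.
def eye_area(width_px, height_px):
    return math.pi * (width_px / 2) * (height_px / 2)

neutral = eye_area(31, 17)  # points 1-2: 31 px, points 3-4: 17 px
fear = eye_area(32, 23)     # points 1-2: 32 px, points 3-4: 23 px
print(f"area increase: {fear / neutral:.2f}x")  # ≈ 1.40x
```

Under this model the area grows by roughly 40% in the fear state, driven almost entirely by the change in height, consistent with the pixel counts above.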
    The next key point is the eyebrows.
                         a)                                                b)
     Figure 10: Images of eyebrow in the color model: a) in the neutral human state; b) in a state of fear

    Figure 10a) shows the eyebrows and eyes of a human in a neutral state, and Figure 10b) under the
influence of fear. As we can see, in a neutral emotional state the distance between the eyebrows (from point
2 to point 1a) is 37 pixels, while in a state of fear it is 32 pixels. Attention should also be paid to the
distance from the eyes to the eyebrows. In the neutral state, the distance is 13 pixels for the left eyebrow
and 8 pixels for the right. In a frightened person, this distance is 4 pixels for both the right and left
eyebrows.
    The last key point in recognizing the emotion of fear is the corners of the lips.




                           a)                                                    b)
Figure 11: Image of lips in the color model: a) in the neutral human state; b) in a state of fear

   Figure 11a) shows the area of the human mouth in a neutral state; it is 21 pixels. In the state of fear, a
person opens their mouth and, accordingly, its size changes. In our case it is 28 pixels (Fig. 11b).
   Based on this study, we can compile Table 1, which contains data on the change of distance in pixels
between the extreme points of the selected facial elements to facilitate the identification of the imaged
emotion.

Table 1
Change of key point positions according to changes in the characteristics of the selected element

    Neutral
    Characteristics: 1. the person does not react in any way and remains in the original state.
    Key point changes:
    1.1. The key elements of the face are in their usual positions.

    Surprise
    Characteristics: 1. dilated eyes; 2. raised eyebrows; 3. open (extended) mouth.
    Key point changes:
    1.1. The distance in pixels between the extreme points of the eye increases.
    2.1. The distance in pixels from the extreme lower point of the eyebrow to the extreme upper point of
    the eye increases.
    3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip increases.
    3.2. The distance between the extreme left and right points of the mouth decreases.

    Fear
    Characteristics: 1. dilated eyes; 2. raised eyebrows; 3. wide open mouth.
    Key point changes:
    1.1. The distance in pixels between the extreme points of the eye increases.
    2.1. The distance in pixels from the extreme lower point of the eyebrow to the extreme upper point of
    the eye increases.
    3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip increases.
    3.2. The distance between the extreme left and right points of the mouth does not change.

    Disgust
    Characteristics: 1. dilated eyes; 2. raised eyebrows; 3. clenched mouth.
    Key point changes:
    1.1. The distance in pixels between the extreme points of the eye increases.
    2.1. The distance in pixels from the extreme lower point of the eyebrow to the extreme upper point of
    the eye increases.
    3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip decreases.

    Angry
    Characteristics: 1. squinted eyes; 2. lowered and shifted eyebrows; 3. open mouth.
    Key point changes:
    1.1. The distance in pixels between the extreme points of the eye decreases.
    2.1. The distance in pixels from the extreme lower point of the eyebrow to the extreme upper point of
    the eye decreases.
    2.2. The distance from the extreme left point of one eyebrow to the extreme right point of the other
    eyebrow decreases.
    3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip increases.

    Happy
    Characteristics: 1. neutral eyes; 2. neutral eyebrows; 3. slightly open mouth.
    Key point changes:
    1.1. The distance in pixels between the extreme points of the eye does not change significantly.
    2.1. The distance in pixels from the eyebrow to the eye does not change significantly.
    3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip increases.

    Sad
    Characteristics: 1. squinted eyes; 2. lowered and shifted eyebrows; 3. clenched mouth.
    Key point changes:
    1.1. The distance in pixels between the extreme points of the eye decreases.
    2.1. The distance in pixels from the extreme lower point of the eyebrow to the extreme upper point of
    the eye decreases.
    2.2. The distance from the extreme left point of one eyebrow to the extreme right point of the other
    eyebrow decreases.
    3.1. The distance from the extreme point of the lower lip to the middle point of the upper lip decreases.

    According to the data in Table 1, the programmatic search for the emotion fixed in the image proceeds
as follows. First, one element, for example the "eyes", is chosen and checked for conformity with the
characteristics of the emotions (Fig. 12).




Figure 12: Identification of emotions according to the characteristics of the eyes

   The figure shows that the further search continues in one of three directions: Fear, Surprise or Disgust
if the eyes are dilated; Angry or Sad if they are squinted; and Happy or Neutral if the eyes are unchanged.
The next steps cut off unnecessary emotions by checking the other characteristics. To determine the change
in the positions of the key points of the eyes, eyebrows and mouth, the ranges of these changes should be
specified in interval form [10], which takes into account the physiological characteristics of human
faces.
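The cascade of Figure 12 and Table 1 can be sketched as a small decision function: each observed change narrows the candidate set, so not all six emotions need to be checked. The symbolic feature names and the two-level structure are illustrative assumptions; in practice the features would come from the interval-based key point measurements described above:

```python
# Sketch of the Figure 12 cascade: eyes narrow the candidate set first,
# then the mouth characteristic from Table 1 resolves the emotion.
def classify(eyes, mouth):
    """eyes: 'dilated' | 'squinted' | 'neutral';
    mouth: 'open' | 'wide_open' | 'clenched' | 'slightly_open' | 'neutral'."""
    if eyes == "dilated":      # candidates: Surprise, Fear, Disgust
        if mouth == "clenched":
            return "Disgust"
        return "Fear" if mouth == "wide_open" else "Surprise"
    if eyes == "squinted":     # candidates: Angry, Sad
        return "Sad" if mouth == "clenched" else "Angry"
    # eyes unchanged: candidates Happy, Neutral
    return "Happy" if mouth == "slightly_open" else "Neutral"

print(classify("dilated", "wide_open"))   # Fear
print(classify("squinted", "clenched"))   # Sad
print(classify("neutral", "neutral"))     # Neutral
```

Note that at most two characteristics are inspected per image, instead of matching the face against all six emotion templates.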

6. Conclusions
   Research in the field of psychology has shown that the emotional states of all people have common
external features. This made it possible to develop a universal classifier of emotions, which can be used to
determine a person's state. The article proposes an approach to the recognition of visualized human
emotions using a pixel color model, which can adapt to changes in the input data, is less time-consuming
than other existing methods, and offers high speed and low resource usage.
   The proposed approach has practical value in marketing decision-making systems based on the analysis
of a human's emotional state while viewing or testing a particular product or service. The article presents
an algorithm for step-by-step identification of visualized human emotion based on the comparison of
changes in the positions of key points of the selected element in accordance with changes in the
characteristics of this element.
    Further research will focus on the development of automated methods and algorithms for recognizing
human emotions, taking into account the physiological characteristics of the human face, gender, age and
more. This consideration is necessary and appropriate, because the physiological features of the facial
structure of men and women are different: the location of the eyebrows, their width, the shape of the nose,
the shape of the lips and their thickness.

7. References

[1] S. Minaee, M. Minaei, A. Abdolrashidi, Deep-Emotion: Facial Expression Recognition Using
     Attentional Convolutional Network, Sensors, 2021, 21, 3046. doi:10.3390/s21093046.
[2] S. Caifeng, S. Gong, P. McOwan, Facial expression recognition based on local binary patterns: A
     comprehensive study, Image and vision Computing 27.6 (2009): 803-816.
[3] M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, J. Movellan, Recognizing facial
     expression: machine learning and application to spontaneous behavior, in: Proceedings of IEEE
     Computer Society Conference on Computer Vision and Pattern Recognition, CVPR ’05, San Diego,
     CA, USA, 2005, pp. 568-573. doi: 10.1109/CVPR.2005.297.
[4] I. Spivak, S. Krepych, V. Faifura, S. Spivak, Methods and tools of face recognition for the marketing
     decision making, in: Proceedings of IEEE International Scientific-Practical Conference: Problems of
     Infocommunications Science and Technology, PICS&T ‘19, Kyiv, Ukraine, 2019, pp. 212–216.
     doi:10.1109/PICST47496.2019.9061229.
[5] A. Kovalenko, The system of recognition of facial expressions of human emotions using a multilayer
     perceptron, Visn. Nat. Lviv Polytechnic University (2011) 76-81.
[6] G. Yefimov, Modeling and recognition of facial expressions of emotions on a person's face, Artificial
     Intelligence, vol. 3 (2009) 532-542.
[7] T. Baltrusaitis, P. Robinson, L. Morency, OpenFace: An open source facial behavior analysis toolkit,
     in: Proceedings of IEEE Winter Conference on Applications of Computer Vision, WACV ’16, Lake
     Placid, NY, USA, 2016, pp. 167-171. doi:10.1109/WACV.2016.7477553.
[8] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture for
     computer vision, in: Proceedings of IEEE Computer Society Conference on Computer Vision and
     Pattern Recognition, CVPR ’16, Las Vegas, NV, USA, 2016, pp. 2818–2826.
     doi: 10.1109/CVPR.2016.308.
[9] K. He, X. Zhang, Sh. Ren, J. Sun, Deep residual learning for image recognition in: Proceedings of
     IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR ’16, Las
     Vegas, NV, USA, 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
[10] I. Spivak, S. Krepych, S. Budenchuk, Methods and means of expert evaluation of software systems on
     the basis of interval data analysis, in: Proceedings of 14th International Conference on Advanced
     Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET ’18, Lviv-
     Slavske, Ukraine, 2018, pp. 101-127. doi: 10.1109/TCSET.2018.8336178.
[11] N. Mehendale, Facial emotion recognition using convolutional neural networks (FERC), SN Applied
     Sciences (2020). doi: 10.1007/s42452-020-2234-1.
[12] V. Chirra, S. Uyyala, V. Kolli, Facial Emotion Recognition Using NLPCA and SVM, Traitement du
     Signal (2019) 13-22. doi: 10.18280/ts.360102.
[13] Y. Kuldeep, S. Joyeeta, Facial expression recognition using modified Viola-Jones algorithm and
     KNN classifier, Multimedia Tools and Applications (2020). doi: 10.1007/s11042-019-08443-x.
[14] M. Uddin, M. Hassan, A. Almogren, A. Alamri, M. Alrubaian, G. Fortino, Facial expression
     recognition utilizing local direction-based robust features and deep belief network, IEEE Access, 2017,
     pp. 4525-4536. doi: 10.1109/ACCESS.2017.2676238.
[15] S. Adeshina, H. Ibrahim, S. Teoh, S. Hoo, Custom Face Classification Model for Classroom Using
     Haar-Like and LBP Features with Their Performance Comparison, Electronics 2021.
     doi: 10.3390/electronics10020102.