<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>OpenFaceR: Developing an R Package for the convenient analysis of OpenFace facial information.</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Denis O'Hora</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>National University of Ireland, Galway</institution>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <abstract>
<p>OpenFace is an open source tool designed to extract the most commonly used facial information from videos, including facial points, head pose, gaze and Facial Action Units. OpenFaceR is a tool designed to help social scientists, and other researchers from less technical disciplines, who are interested in facial nonverbal behaviors (FNVBs) to easily use output from OpenFace 2.0. The output from OpenFace is one CSV file for each video, with information on each feature for each frame of the analyzed video provided in rows. OpenFaceR constitutes a set of methods to convert information in this format into relevant summary statistics. In this paper, we focus on the set of methods in OpenFaceR that extract information from a series of videos and transform the output files into a single dataset in which each row reports the summary values of a feature for one video.</p>
      </abstract>
      <kwd-group>
        <kwd>OpenFace</kwd>
        <kwd>R</kwd>
        <kwd>Nonverbal Behaviors</kwd>
        <kwd>Face</kwd>
        <kwd>Computer Vision</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Humans are social animals, capable of complex and variable behaviour. The face is a
central element of human sociality[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], since it provides rich information for immediate
social judgments through static cues (e.g. biometrics, skin colour, feminine/masculine
features, regional traits, etc.) and dynamic cues (e.g. smiles, blinks, gaze, emotion
expression, etc.). The latter, also called Facial Nonverbal Behaviors (FNVBs), have been
widely studied in many fields such as the display of emotions [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], lie detection [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
interpersonal relations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and personality recognition [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Research on FNVBs can be further divided into two streams, which require different
techniques of data collection and data analysis. The first is concerned with how facial
expressions change over time within subjects (e.g. studies on mimicry or on emotional reactions [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]) and therefore requires FNVB data for each temporal unit. The second stream is concerned
with how FNVBs differ between individuals (e.g. nonverbal expression of personality [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]) or
within the same individuals in different conditions (e.g. being ingenuous vs being
deceitful [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]). This second case is the focus of this paper; here, the analysis is
performed on summary measures of FNVBs, such as their frequency.
      </p>
      <p>
        FNVBs are traditionally annotated manually through one of many existing
scales (e.g. the Riverside Q-Sort [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]; Münster Behavior coding system [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]). One of the most
popular is the Facial Action Coding System (FACS) by Paul Ekman [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which analyses the smallest independent movements of the facial muscles, called Action Units
(AUs). The FACS provides a detailed and objective approach to the classification of FNVBs,
but manual annotation of AUs is a demanding job that requires a considerable amount of time
from well-trained observers [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Recently, progress in computer
vision has allowed the development of software for the automatic analysis and recognition
of facial static and dynamic characteristics [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Amongst these, OpenFace, an
open source software package developed at Cambridge University by Baltrusaitis and colleagues [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], is one of the most widely used in the social sciences, with 753 citations as of August 16, 2020.
OpenFaceR, the GitHub repository presented in this paper, includes a set of R functions
intended to facilitate the use of OpenFace 2.0 by social scientists.
      </p>
    </sec>
    <sec id="sec-2">
      <title>OpenFace</title>
      <p>
        The major goal of OpenFace is to provide a comprehensive, open source and free tool
for describing facial behaviors [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. OpenFace estimates the status of four different
types of features: facial landmarks, head pose, eye gaze, and facial expressions. The x,
y and z positions of 68 facial landmarks are identified using a Convolutional Experts
Constrained Local Model [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Based on these values, head pose is estimated through
the orthographic projection of an internal 3D representation of the facial landmarks
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. To estimate the direction of eye gaze, OpenFace first uses a Constrained Local
Neural Field to detect eyelids, pupils and iris. Then, an eyeball model and head pose
information are incorporated in a complex process to estimate gaze direction [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
Finally, OpenFace makes use of a linear kernel Support Vector approach to describe 18
AUs [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] (e.g. movement of the lip corner puller, the muscle we use to smile). For each
AU it estimates its intensity (e.g. a number between 0 and 1 describing how much the
lip corner puller is contracted) and its presence (e.g. whether the movement of the lip corner
puller is large enough to be observed as a smile). OpenFace 2.0 has been tested
on two different datasets, achieving state-of-the-art performance despite
comparatively low computational demands [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The software can be run from the command
prompt to analyse a single video or multiple videos stored in a folder. The output, for
each video, is a Comma Separated Values (CSV) file including 538 values for each
frame:
- frame number
- timestamp
- confidence (how accurate the analysis of the frame is likely to be) and success (whether confidence is high enough)
- x, y and z coordinates of the gaze for each eye
- x and y polar coordinates of the gaze angle
- 56 by 2 (x and y) 2D eye landmark positions
- 56 by 3 (x, y and z) 3D eye landmark positions
- x, y and z coordinates of the head position
- roll, pitch and yaw of the head
- 68 by 3 (x, y and z) facial landmark positions
- 18 by 2 (presence and intensity) AUs
      </p>
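      <p>For readers who wish to inspect this output directly before using OpenFaceR, the following minimal sketch (not part of OpenFaceR) reads one OpenFace CSV file into R and peeks at a handful of the 538 columns. The file path is hypothetical, and the column names follow the OpenFace 2.0 documentation, so they may need adjusting for other versions.</p>
      <preformat>
## Minimal sketch: inspect a single OpenFace output file (not an OpenFaceR function)
library(readr)
library(dplyr)

one_video &lt;- read_csv("openface_output/participant_01.csv")

one_video %&gt;%
  select(frame, timestamp, confidence, success,   # frame metadata
         gaze_angle_x, gaze_angle_y,              # gaze angle
         pose_Rx, pose_Ry, pose_Rz,               # head rotation (pitch, yaw, roll)
         AU06_c, AU12_c) %&gt;%                      # presence of cheek raiser and lip corner puller
  head()
      </preformat>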
    </sec>
    <sec id="sec-3">
      <title>Fitting OpenFace data to social scientists’ needs</title>
      <p>
        The output from OpenFace is rich and detailed but, for this very reason, it is not ideal
for data analysis by most social scientists. When OpenFace processes a video (usually
depicting one person’s participation), a long CSV document is produced, in which each
row reports the 538 values noted previously for each frame. OpenFace typically
analyses videos at a 30 Hz frame rate, so the standard output is a CSV file with a number of rows
equal to 30 times the duration of the video in seconds. Such data are perfectly suitable
for time series analysis [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], but many social scientists are not trained in such techniques
and wish to test hypotheses concerning summary statistics (e.g. frequency, or mean and
standard deviation) of FNVBs for each person (in a between-subjects design) or for
each person in each condition (in a within-subjects design).
      </p>
      <p>
        To provide data more suitable for the needs of social scientists, we employed the
‘tidy’ framework proposed by Hadley Wickham for easier data analysis and
visualization [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Datasets are defined as tidy if each row corresponds to an observation, each
column corresponds to a variable and each type of observational unit forms a table [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
The challenge for social scientists using OpenFace, therefore, is how to transform
frame-level output into a tidy dataset with output per person or condition. Figure 1
shows an example in which 60-second videos of three people have been analysed with
OpenFace to annotate true smiles and blinks. The left of the figure represents the
OpenFace output, with one person per dataset, one frame per row and each column
representing the absence or presence of a facial action unit. On the right, there is a tidy
dataset in which each row represents a person and each column is a summary of that
person's FNVBs, in this case the frequency of true smiles and blinks.
      </p>
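      <p>The following plain-dplyr sketch illustrates the same reshaping on made-up data: frame-level rows with binary smile and blink columns are collapsed into one summary row per person. The variable names are illustrative rather than actual OpenFace column names; OpenFaceR wraps this kind of transformation into dedicated functions.</p>
      <preformat>
## Illustrative sketch of the frame-level to person-level reshaping shown in Fig. 1
library(dplyr)
library(tibble)

frames &lt;- tibble(
  person    = rep(c("p1", "p2", "p3"), each = 5),   # three videos (toy data)
  timestamp = rep(seq(0, 48, by = 12), times = 3),
  smile     = c(0,1,1,0,1, 0,0,1,0,0, 1,1,1,0,1),   # presence of a true smile per frame
  blink     = c(0,0,1,0,0, 1,0,0,1,0, 0,1,0,0,0)    # presence of a blink per frame
)

tidy_summary &lt;- frames %&gt;%
  group_by(person) %&gt;%
  summarise(smiles = sum(smile),                    # frequency of true smiles
            blinks = sum(blink),                    # frequency of blinks
            .groups = "drop")
      </preformat>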
      <p>The main goal of OpenFaceR is to provide a set of tools and a workflow for the
creation of such tidy datasets for social scientists.
</p>
    </sec>
    <sec id="sec-4">
      <title>OpenFaceR workflow</title>
      <p>
        OpenFaceR assists social scientists through a workflow that leads from the analysis of
videos to the consolidation of a tidy dataset with one person per row. Its functions make
extensive use of the tidyverse package [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The tidyverse is a collection of packages that aim to “facilitate a conversation between
a human and a computer about data” [19, p. 1]. It includes methods for data importing, data
tidying, data manipulation and data visualisation. Notably, OpenFaceR uses and extends the
functions “mutate”, “filter”, “select” and “summarise” from dplyr and makes extensive use
of the pipe operator “%&gt;%” from magrittr. OpenFaceR also imports and returns datasets
as tibbles [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], a tidyverse equivalent of base R data frames that offers better performance
and visualisation methods.
      </p>
      <p>
        The OpenFaceR workflow is designed to accomplish the transformation from raw
video material to a tidy dataset. To start the workflow, the user needs the following:
video files of each person (or each person in each condition), the OpenFace software
package, R [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] (we also recommend RStudio [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]), and OpenFaceR. It is easiest if the
videos correspond to the unit of analysis. For example, in a within-participants design,
it is easiest if each condition is captured in a separate video file. However, it is possible
to extract sections of videos by filtering, which is described later. OpenFace is
implemented in Python [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and PyTorch [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. Detailed instructions for Linux and MacOS
X installation are provided at https://cmusatyalab.github.io/openface/setup/.
Instructions for the installation of the executable file for Windows are provided here:
https://github.com/TadasBaltrusaitis/OpenFace/wiki/Windows-Installation. R can be
downloaded from the CRAN repository (https://cran.r-project.org/). At present, the
OpenFaceR toolkit can be downloaded from GitHub at
https://github.com/davidecannatanuig/, but installation using the devtools R package will be implemented in the near
future. OpenFaceR requires the following R packages to be installed: tidyverse [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] and
pracma [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Fig. 2, below, shows the six steps of the process, which are discussed in detail
in the following sections.
      </p>
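      <p>A minimal setup sketch follows, under the assumptions stated above: the dependencies come from CRAN, while OpenFaceR itself is obtained from the GitHub repository because no CRAN release exists yet.</p>
      <preformat>
## Install the R dependencies required by OpenFaceR
install.packages(c("tidyverse", "pracma"))

## OpenFaceR is currently distributed via GitHub; devtools-based installation is planned.
## Until then, clone https://github.com/davidecannatanuig/ and source the function
## scripts locally (the file name below is hypothetical).
# source("OpenFaceR/R/openfacer.R")
      </preformat>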
      <p>To facilitate readers’ comprehension of this six-step process, we employ an example
of a simple psychological experiment investigating the effects of positive and negative
memories on facial behaviours. In our example, the researcher has collected videos of
50 students telling one story about a personal success and one story about a personal
failure in front of a camera. The hypothesis is that students will smile more frequently and
will display more intense facial activity in the success-story condition.
</p>
      <sec id="sec-4-1">
        <title>Videos to CSVs using OpenFace</title>
        <p>Prior to using the utility functions in OpenFaceR, users must process their videos using
OpenFace. To help users produce the appropriate syntax for these commands in
Windows, OpenFaceR provides the function get_commands() that outputs the commands
and parameters for executing OpenFace on a single video or on a set of videos contained
in a folder. After running get_commands(), the user can copy and paste its output at the
command line to initiate the analysis or analyses. In the example described above, the
input_dir is the folder containing the 100 video files recording the students telling their
stories, and the output_dir will hold the 100 CSV files produced by OpenFace. The duration
of this process depends on the user’s computer hardware, specifically the GPU, CPU and
storage medium (e.g. SSD drive).
</p>
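        <p>A hedged usage sketch of this step is shown below; the argument names input_dir and output_dir follow the text, but the exact signature of get_commands() may differ in the repository.</p>
        <preformat>
## Step 1 sketch: generate the OpenFace command line(s) for a folder of videos
## (assumes the OpenFaceR functions have been loaded; see the setup sketch above)
cmds &lt;- get_commands(input_dir  = "C:/study/videos",        # 100 recorded stories
                     output_dir = "C:/study/openface_csv")   # destination for the CSVs

## Print the generated command(s), ready to paste into the Windows command prompt
cat(cmds, sep = "\n")
        </preformat>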
      </sec>
      <sec id="sec-4-2">
        <title>CSVs to faces objects</title>
        <p>From Step 2, the remaining steps are completed in the R environment. The function
read_faces_csv() allows the user to import all the csv files contained in a folder into an
object of class “faces”. Faces is a new bespoke S3 class that inherits from lists; in fact, it
is a list of tibbles, with each tibble representing the output from one video. In our example,
read_faces_csv() will import the 100 CSV files saved in the output_dir from the previous
step and produce a faces object containing 100 tibbles. The time needed to perform this
operation, although dependent on the local machine's specifications, can be significant.
</p>
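        <p>A short sketch of this step, assuming the folder path used in Step 1:</p>
        <preformat>
## Step 2 sketch: import all OpenFace CSVs into a single "faces" object
faces &lt;- read_faces_csv("C:/study/openface_csv")

length(faces)   # 100 tibbles in the running example, one per video
        </preformat>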
      </sec>
      <sec id="sec-4-3">
        <title>Filtering</title>
        <p>It is often necessary to filter out certain faces or conditions due to errors in the extraction
of data, low confidence and so on. The verb filter_faces() allows the user to filter all
the tibbles of a faces object, with a grammar that echoes the dplyr filter method. A
typical filter is set up for “success”, the variable indicating whether the extraction of
data was reliably done for each frame of the video. It is also possible to standardize the
extraction parameters of the videos by filtering. For example, to standardize the
duration of videos that will be analyzed, one can filter the timestamps, restricting them to
minimum and maximum values. In our example, the researcher wants to standardize
the length of videos, as time might also influence the production of smiles and face
activity. They can therefore use filter_faces(timestamp &lt; 180) to take only the first 3
minutes of each video. The result can be stored in a new filtered faces object or, by using
the pipe operator %&gt;%, steps 3 to 6 can be conducted in series and output as a tidy
data frame.
</p>
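        <p>A sketch of this filtering step, using the grammar described above:</p>
        <preformat>
## Step 3 sketch: keep only reliably tracked frames and the first 3 minutes of each video
filtered &lt;- faces %&gt;%
  filter_faces(success == 1) %&gt;%
  filter_faces(timestamp &lt; 180)
        </preformat>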
      </sec>
      <sec id="sec-4-4">
        <title>Features engineering</title>
        <p>
          OpenFaceR provides two verbs to manipulate the variables of each video and to
engineer new features. mutate_faces() echoes dplyr::mutate(). The function
transform_faces() meets the need for transformation functions that take as input a preset
selection of variables, as opposed to the mutate method, which is designed to work
with user-specified variables. The two verbs are accompanied by a growing number of
functions specifically designed for analyzing faces. In our example, the researcher uses
mutate_faces(smile = ifelse(AU06_c + AU12_c == 2, 1, 0)) to identify when the
experimental subjects are displaying the two AUs characterizing smiles. Furthermore, they
will use the function transform_faces(“mei”, mei) to calculate the corrected average
motion of the face region[
          <xref ref-type="bibr" rid="ref25">25</xref>
          ], a measure of facial activity.
        </p>
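        <p>A sketch of this step, continuing the running example (the mutate_faces() call below uses the corrected parenthesisation of the expression given in the text):</p>
        <preformat>
## Step 4 sketch: a "true" smile is coded when both AU06 (cheek raiser) and AU12
## (lip corner puller) are present; mei computes motion energy of the face region
engineered &lt;- filtered %&gt;%
  mutate_faces(smile = ifelse(AU06_c + AU12_c == 2, 1, 0)) %&gt;%
  transform_faces("mei", mei)
        </preformat>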
      </sec>
      <sec id="sec-4-5">
        <title>Selection of features</title>
        <p>We have implemented the verb select_faces() to select which features are eventually
summarised, echoing the select() method from dplyr. As frame, timestamp and success
are critical meta-variables, select_faces() always returns those in the output. In our
example the researcher will use select_faces(smile, mei) to select the two variables they
intend to summarise.</p>
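        <p>In code, continuing the example:</p>
        <preformat>
## Step 5 sketch: keep only the engineered variables to be summarised
## (frame, timestamp and success are always carried through by select_faces)
selected &lt;- engineered %&gt;%
  select_faces(smile, mei)
        </preformat>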
      </sec>
      <sec id="sec-4-6">
        <title>Tidy dataset consolidation</title>
        <p>The function tidy_faces() is designed to transform a preprocessed faces object into a tidy
dataset with one person per row and all the most common statistics. Fig. 3 summarizes
the function’s architecture. Here, the arrows represent the logical steps, while the boxes
represent the methods used, including the inputs they take from the main function. First,
tidy_faces() merges all the tibbles of the faces object into one single tibble through the
merge_faces() method. Second, it calculates the length of each video. Third, it classifies
the variables into continuous (e.g. distance of the face from the camera) and discrete
(events, such as blinks and smiles). Fourth, if the events parameter is set to TRUE (the default),
all the discrete variables are summarized. Events can be summarized by simply
counting them (“count”), as events per second (“eps”), as events per minute (“epm”), or as an
event ratio (“ratio”, the number of frames in which the event happens divided by the total
number of frames). Fifth, if the continuous parameter is set to TRUE (the default), all the
continuous variables are summarized with a choice of methods including mean,
median, standard deviation, minimum and maximum. Finally, all the summarized
variables are merged into a tidy dataset.</p>
        <p>In our example, calling tidy_faces(events_sum = “epm”, median = TRUE) will
return one data frame with 100 rows (one per video) with columns for video ID, video
duration, the mean, standard deviation and median of facial activity, and the number of
smiles per minute. The researchers can then test their hypotheses by running t-tests, a
repeated-measures MANOVA or other statistical approaches available in R or other
packages.</p>
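        <p>Putting the steps together, the whole chain from the imported faces object to the tidy dataset can be written as a single pipe, as noted in the filtering step above. Argument names follow the text; the events and continuous parameters are assumed to keep their default values.</p>
        <preformat>
## Steps 3-6 sketch: from the imported faces object to one row per video
results &lt;- faces %&gt;%
  filter_faces(success == 1) %&gt;%
  filter_faces(timestamp &lt; 180) %&gt;%
  mutate_faces(smile = ifelse(AU06_c + AU12_c == 2, 1, 0)) %&gt;%
  transform_faces("mei", mei) %&gt;%
  select_faces(smile, mei) %&gt;%
  tidy_faces(events_sum = "epm", median = TRUE)

## One row per video: video ID, duration, smiles per minute, mean/SD/median of mei
results
        </preformat>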
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>In this paper we have explained the goal of OpenFaceR and outlined the
main characteristics of the workflow from raw video data to a dataset that can be used
for typical statistical analysis in the social sciences. OpenFaceR is still in its infancy, and
new functions are being built to provide the most common methods of summarizing
FNVBs. The final goal of this enterprise is to compile an R package for publication on
CRAN. Questions, feedback, collaborations, and ideas are welcome.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Jack</surname>
            ,
            <given-names>R. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schyns</surname>
            ,
            <given-names>P. G.</given-names>
          </string-name>
          :
          <article-title>Toward a social Psychophysics of Face Communication</article-title>
          .
          <source>Annu. Rev. Psychol</source>
          .
          <volume>68</volume>
          ,
          <fpage>269</fpage>
          -
          <lpage>297</lpage>
          (
          <year>2017</year>
          ). doi:10.1146/annurev-psych-010416-044242
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gunnery</surname>
            ,
            <given-names>S. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrzejewski</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          :
          <article-title>Nonverbal Emotion Displays , Communication Modality , and the Judgment of Personality</article-title>
          .
          <source>J. Res. Pers</source>
          .
          <volume>45</volume>
          ,
          <fpage>77</fpage>
          -
          <lpage>83</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beattie</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shovelton</surname>
          </string-name>
          , H.:
          <article-title>Nonverbal indicators of deception: How iconic gestures reveal thoughts that cannot be suppressed</article-title>
          .
          <source>Semiotica</source>
          <year>2010</year>
          ,
          <fpage>133</fpage>
          -
          <lpage>174</lpage>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Grahe</surname>
            ,
            <given-names>J. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernieri</surname>
            ,
            <given-names>F. J.:</given-names>
          </string-name>
          <article-title>The importance of nonverbal cues in judging rapport</article-title>
          .
          <source>J. Nonverbal Behav</source>
          .
          <volume>23</volume>
          ,
          <fpage>253</fpage>
          -
          <lpage>269</lpage>
          (
          <year>1999</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Breil</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Osterholz</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nestler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Back</surname>
          </string-name>
          , M. D.:
          <article-title>Contributions of Nonverbal Cues to the Accurate Judgment of Personality Traits</article-title>
          . In: Letzring,
          <string-name>
            <given-names>T. D.</given-names>
            ,
            <surname>Spain</surname>
          </string-name>
          ,
          <string-name>
            <surname>J</surname>
          </string-name>
          . (eds.)
          <source>The Oxford Handbook of Accurate Personality Judgment</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>54</lpage>
          . Oxford University Press, Oxford, UK (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Arnold</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Winkielman</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>The Mimicry Among Us: Intra- and Inter-Personal Mechanisms of Spontaneous Mimicry</article-title>
          .
          <source>J. Nonverbal Behav</source>
          .
          <volume>44</volume>
          ,
          <fpage>195</fpage>
          -
          <lpage>212</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Back</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nestler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Accuracy of Judging Personality</article-title>
          . In: Hall,
          <string-name>
            <given-names>J. A.</given-names>
            ,
            <surname>Schmid</surname>
          </string-name>
          <string-name>
            <surname>Mast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>West</surname>
          </string-name>
          , T. (eds.)
          <source>The Social Psychology of Perceiving Others Accurately</source>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>124</lpage>
          . Cambridge University Press, Cambridge, UK (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Funder</surname>
            ,
            <given-names>D. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Furr</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Colvin</surname>
            ,
            <given-names>C. R.:</given-names>
          </string-name>
          <article-title>The Riverside Behavioral Q-sort: A Tool for the Description of Social Behavior</article-title>
          .
          <source>J. Pers</source>
          .
          <volume>68</volume>
          ,
          <fpage>451</fpage>
          -
          <lpage>489</lpage>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Grünberg</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mattern</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geukes</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Küfner</surname>
            ,
            <given-names>A. C. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Back</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          : Assessing Group Interactions in Personality Psychology.
          <source>In: The Cambridge Handbook of Group Interaction Analysis</source>
          <volume>53</volume>
          ,
          <fpage>602</fpage>
          -
          <lpage>611</lpage>
          . Cambridge University Press, UK (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ekman</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friesen</surname>
            ,
            <given-names>W. V.:</given-names>
          </string-name>
          <article-title>Facial action coding system: A technique for the measurement of facial movement</article-title>
          . Consulting Psychologist Press, Berkeley, CA (
          <year>1978</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Furr</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Funder</surname>
            ,
            <given-names>D. C.</given-names>
          </string-name>
          :
          <article-title>Behavioral observation</article-title>
          . In: Robins,
          <string-name>
            <given-names>R. W.</given-names>
            ,
            <surname>Fraley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Krueger</surname>
          </string-name>
          ,
          <string-name>
            <surname>R. F</surname>
          </string-name>
          . (eds.) Handbook of Research Methods in Personality Psychology. Guilford Press, NY (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Cannata</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simon</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lepri</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Back</surname>
            ,
            <given-names>M. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>O'Hora</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Toward an Integrative Approach to Nonverbal Personality Detection</article-title>
          . (In press).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Baltrušaitis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zadeh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>Y. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morency</surname>
            ,
            <given-names>L.P.:</given-names>
          </string-name>
          <article-title>OpenFace 2.0: Facial Behavior Analysis Toolkit</article-title>
          .
          <source>in 2018 13th IEEE International Conference on Automatic Face &amp; Gesture Recognition (FG</source>
          <year>2018</year>
          )
          <volume>59</volume>
          -
          <fpage>66</fpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Zadeh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>Y. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baltrušaitis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morency</surname>
            ,
            <given-names>L. P.</given-names>
          </string-name>
          :
          <article-title>Convolutional experts constrained local model for 3D facial landmark detection</article-title>
          .
          <source>Proc. - 2017 IEEE Int. Conf. Comput. Vis. Work. ICCVW</source>
          <year>2017</year>
          2018-Janua,
          <fpage>2519</fpage>
          -
          <lpage>2528</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Wood</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          et al.
          <article-title>Rendering of Eyes for Eye-Shape Registration and Gaze Estimation</article-title>
          .
          <source>In Proceedings of the IEEE International Conference on Computer Vision</source>
          ,
          <fpage>3756</fpage>
          -
          <lpage>3764</lpage>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Baltrušaitis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mahmoud</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Robinson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Cross-dataset learning and person-specific normalisation for automatic Action Unit detection</article-title>
          .
          <source>in 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG</source>
          )
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Paxton</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dale</surname>
          </string-name>
          , R.:
          <article-title>Frame-differencing methods for measuring bodily synchrony in conversation</article-title>
          .
          <source>Behav. Res. Meth.</source>
          ,
          <volume>45</volume>
          (
          <issue>2</issue>
          ),
          <fpage>329</fpage>
          -
          <lpage>343</lpage>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Wickham</surname>
          </string-name>
          , H.:
          <article-title>Tidy Data</article-title>
          .
          <source>J. Stat. Softw</source>
          .
          <volume>59</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Wickham</surname>
          </string-name>
          , H. et al.:
          <article-title>Welcome to the Tidyverse</article-title>
          .
          <source>J. Open Source Softw</source>
          .
          <volume>4</volume>
          ,
          <issue>1686</issue>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wickham</surname>
          </string-name>
          , H.:
          <article-title>Tibble: Simple Data Frames</article-title>
          . (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>R</given-names>
            <surname>Core Team</surname>
          </string-name>
          :
          <article-title>A Language and Environment for Statistical Computing</article-title>
          . (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>RStudio Team</surname>
          </string-name>
          :
          <article-title>RStudio: Integrated Development Environment for R</article-title>
          .
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23. Van Rossum,
          <string-name>
            <given-names>G.</given-names>
            &amp;
            <surname>Drake</surname>
          </string-name>
          ,
          <string-name>
            <surname>F. L.</surname>
          </string-name>
          :
          <article-title>Python 3 Reference Manual</article-title>
          .
          <source>(CreateSpace</source>
          ,
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Paszke</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.:
          <article-title>Automatic differentiation in PyTorch</article-title>
          .
          <source>In 31st Conference on Neural Information Processing Systems (NIPS 2017)</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Borchers</surname>
            ,
            <given-names>H. W.</given-names>
          </string-name>
          : Pracma: Practical Numerical Math Functions. (
          <year>2017</year>
          ).
          <source>R package 2.0.7.</source>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Ramseyer</surname>
            ,
            <given-names>F. T.</given-names>
          </string-name>
          :
          <article-title>Motion energy analysis (MEA): A primer on the assessment of motion from video</article-title>
          .
          <source>J. Couns. Psychol</source>
          .
          <volume>67</volume>
          ,
          <fpage>536</fpage>
          -
          <lpage>549</lpage>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>