<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">OpenFaceR: Developing an R Package for the convenient analysis of OpenFace facial information 12</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">National University of Galway</orgName>
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">OpenFaceR: Developing an R Package for the convenient analysis of OpenFace facial information 12</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A43FC5A84E000872907F5B3C16B68485</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T19:04+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>OpenFace</term>
					<term>R</term>
					<term>Nonverbal Behaviors</term>
					<term>Face</term>
					<term>Computer Vision</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>OpenFace is an open-source tool designed to extract the most commonly used facial information from videos, including facial points, head pose, gaze and Facial Action Units. OpenFaceR is a tool designed to help social scientists, and other researchers from less technical disciplines, who are interested in facial nonverbal behaviors (FNVBs) to easily use the output of OpenFace 2.0. The output of OpenFace is one CSV file for each video, with information on each feature for each frame of the analyzed video provided in rows. OpenFaceR provides a set of methods to convert information in this format into relevant summary statistics. In this paper, we focus on the set of methods in OpenFaceR that extract information from a series of videos and transform the output files into a single dataset in which each row reports the summary values of the features for one video.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Humans are social animals, capable of complex and variable behaviour. The face is a central element of human sociality <ref type="bibr" target="#b0">[1]</ref>, since it provides rich information for immediate social judgments through static cues (e.g. biometrics, skin colour, feminine/masculine features, regional traits, etc.) and dynamic cues (e.g. smiles, blinks, gaze, emotion expression, etc.). The latter, also called Facial Nonverbal Behaviors (FNVBs), have been widely studied in many fields, such as the display of emotions <ref type="bibr" target="#b1">[2]</ref>, lie detection <ref type="bibr" target="#b2">[3]</ref>, interpersonal relations <ref type="bibr" target="#b3">[4]</ref>, and personality recognition <ref type="bibr" target="#b4">[5]</ref>. Research on FNVBs can be further divided into two streams, which require different techniques of data collection and data analysis. The first is concerned with how facial expressions change over time within subjects (e.g. studies on mimicry or on emotional reactions <ref type="bibr" target="#b5">[6]</ref>) and therefore requires FNVB data for each temporal unit. The second stream is concerned with how FNVBs differ between individuals (e.g. nonverbal expression of personality <ref type="bibr" target="#b6">[7]</ref>) or within the same individuals in different conditions (e.g. being truthful vs. being deceitful <ref type="bibr" target="#b2">[3]</ref>). In this second case, which is the focus of this paper, the analysis is performed on summary measures of FNVBs, such as their frequency.</p><p>FNVBs are traditionally annotated manually, through one of the many existing scales (e.g. the Riverside Q-Sort <ref type="bibr" target="#b7">[8]</ref>; the Münster Behavior coding system <ref type="bibr" target="#b8">[9]</ref>). One of the most popular is the Facial Action Coding System (FACS) by Paul Ekman <ref type="bibr" target="#b9">[10]</ref>, which analyses the smallest independent movements of the facial muscles, called Action Units (AUs). FACS provides a detailed and objective approach to the classification of FNVBs, but manual annotation of AUs is a demanding job that requires a considerable amount of time from well-trained observers <ref type="bibr" target="#b10">[11]</ref>. Recently, progress in computer vision has allowed the development of software for the automatic analysis and recognition of static and dynamic facial characteristics <ref type="bibr" target="#b11">[12]</ref>. Amongst these, OpenFace, an open-source software package developed at Cambridge University by Baltrusaitis and colleagues <ref type="bibr" target="#b12">[13]</ref>, is one of the most used in the social sciences, with 753 citations by August 16, 2020. OpenFaceR, the GitHub repository presented in this paper, includes a set of R functions intended to facilitate the use of OpenFace 2.0 by social scientists.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">OpenFace</head><p>The major goal of OpenFace is to provide a comprehensive, open-source and free tool for describing facial behaviors <ref type="bibr" target="#b12">[13]</ref>. OpenFace estimates four different types of features: facial landmarks, head pose, eye gaze and facial expressions. The x, y and z positions of 68 facial landmarks are identified using a Convolutional Experts Constrained Local Model <ref type="bibr" target="#b13">[14]</ref>. Based on these values, head pose is estimated through the orthographic projection of an internal 3D representation of the facial landmarks <ref type="bibr" target="#b12">[13]</ref>. To estimate the direction of eye gaze, OpenFace first uses a Constrained Local Neural Field to detect the eyelids, pupils and irises. Then, an eyeball model and head pose information are incorporated in a complex process to estimate gaze direction <ref type="bibr" target="#b14">[15]</ref>. Finally, OpenFace makes use of a linear kernel Support Vector approach to describe 18 AUs <ref type="bibr" target="#b15">[16]</ref> (e.g., movement of the lip corner puller, the muscle we use to smile). For each AU it estimates its intensity (e.g. a number between 0 and 1 describing how much the lip corner puller is contracted) and its presence (e.g., whether the movement of the lip corner puller is large enough to be observed as a smile). OpenFace 2.0 has been tested on two different datasets, achieving state-of-the-art performance despite comparatively low computational demands <ref type="bibr" target="#b12">[13]</ref>. The software can be run from the command prompt to analyse a single video or multiple videos stored in a folder. The output, for each video, is a Comma Separated Values (CSV) file including 538 values for each frame:</p><p>- frame number
- timestamp
- confidence (how accurate the analysis of the frame is likely to be) and success (whether confidence is high enough)
- x, y and z coordinates of the gaze for each eye
- x and y polar coordinates of the gaze angle
- 56 by 2 (x and y) 2D eye landmark positions
- 56 by 3 (x, y and z) 3D eye landmark positions
- x, y and z coordinates of the head position
- roll, pitch and yaw of the head
- 68 by 3 (x, y and z) facial landmark positions
- 18 by 2 (presence and intensity) AUs</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Fitting OpenFace data to social scientists' needs</head><p>The output from OpenFace is rich and detailed but, for this very reason, it is not ideal for data analysis by most social scientists. When OpenFace processes a video (usually depicting one person's participation), a long CSV document is produced, in which each row reports the 538 values noted above for one frame. OpenFace typically analyses videos at a 30 Hz frame rate, so the standard output is a CSV with a number of rows equal to 30 times the duration of the video in seconds. Such data are perfectly suitable for time series analysis <ref type="bibr" target="#b16">[17]</ref>, but many social scientists are not trained in such techniques and wish to test hypotheses concerning summary statistics (e.g. frequency, or mean and standard deviation) of FNVBs for each person (in a between-subjects design) or for each person in each condition (in a within-subjects design).</p><p>To provide data more suitable for the needs of social scientists, we employed the 'tidy' framework proposed by Hadley Wickham for easier data analysis and visualization <ref type="bibr" target="#b17">[18]</ref>. Datasets are defined as tidy if each row corresponds to an observation, each column corresponds to a variable and each type of observational unit forms a table <ref type="bibr" target="#b17">[18]</ref>. 
The challenge for social scientists using OpenFace, therefore, is how to transform frame-level output into a tidy dataset with output per person or condition. Figure <ref type="figure">1</ref> shows an example in which 60-second videos of three people have been analysed with OpenFace to annotate true smiles and blinks. The left of the figure represents the OpenFace output, with one person per dataset, one frame per row and each column representing the absence or presence of a facial action unit. On the right, there is a tidy dataset in which each row represents a person and each column is a summary of the person's FNVBs, in this case the frequency of true smiles and blinks. Fig. <ref type="figure">1</ref> Conceptual example of the transformation from OpenFace output (on the left) to a tidy dataset (on the right) in which each row corresponds to a person. AU_6 (cheek raiser) and AU_12 (lip corner puller) in combination signal a true smile. AU_45 represents a blink.</p><p>The main goal of OpenFaceR is to provide a set of tools and a workflow for the creation of such tidy datasets for social scientists.</p></div>
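<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the transformation in Figure 1 concrete, the following is a minimal illustrative sketch in plain tidyverse code, not OpenFaceR's actual implementation; the column names (video_id and the AU presence columns) are assumptions for the example, and events are counted here simply as the number of frames in which they are present.</p><p>library(dplyr)

# frames: one row per frame, as on the left of Fig. 1
# (assumed columns: video_id, timestamp, AU06_c, AU12_c, AU45_c)
tidy_example &lt;- frames %&gt;%
  group_by(video_id) %&gt;%
  summarise(
    true_smiles = sum(AU06_c == 1 &amp; AU12_c == 1),  # frames showing a true smile
    blinks      = sum(AU45_c == 1)                 # frames showing a blink
  )</p></div>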
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">OpenFaceR workflow</head><p>OpenFaceR assists social scientists through a workflow that leads from the analysis of videos to the consolidation of a tidy dataset with one person per row. Its functions make extensive use of the tidyverse package <ref type="bibr" target="#b18">[19]</ref>. The tidyverse is a collection of packages designed to "facilitate a conversation between a human and a computer about data" <ref type="bibr">[19, p. 1]</ref>. It includes methods for data importing, data tidying, data manipulation and data visualisation. Notably, OpenFaceR uses and extends the functions "mutate", "filter", "select" and "summarise" from dplyr and makes extensive use of the pipe sign "%&gt;%" from magrittr. Also, OpenFaceR imports and returns datasets as tibbles <ref type="bibr" target="#b19">[20]</ref>, a tidyverse equivalent to base R data frames offering better performance and visualisation methods. The OpenFaceR workflow is designed to accomplish the transformation from raw video material to a tidy dataset. To start the workflow, the user needs the following: video files of each person (or each person in each condition), the OpenFace software package, R <ref type="bibr" target="#b20">[21]</ref> (we also recommend RStudio <ref type="bibr" target="#b21">[22]</ref>), and OpenFaceR. It is easiest if the videos correspond to the unit of analysis. For example, in a within-participants design, it is easiest if each condition is captured in a separate video file. However, it is possible to extract sections of videos by filtering, which is described later. OpenFace is implemented in Python <ref type="bibr" target="#b22">[23]</ref> and PyTorch <ref type="bibr" target="#b23">[24]</ref>. Detailed instructions for Linux and MacOS X installation are provided at https://cmusatyalab.github.io/openface/setup/. Instructions for the installation of the executable file for Windows are provided here: https://github.com/TadasBaltrusaitis/OpenFace/wiki/Windows-Installation. R can be downloaded from the CRAN repository (https://cran.r-project.org/). At present, the OpenFaceR toolkit can be downloaded from GitHub at https://github.com/davidecannatanuig/, but installation using the devtools R package will be implemented in the near future. OpenFaceR requires the following R packages to be installed: tidyverse <ref type="bibr" target="#b18">[19]</ref> and pracma <ref type="bibr" target="#b24">[25]</ref>. Fig. <ref type="figure">2</ref>, below, shows the six steps of the process, which are extensively discussed in the next sections.</p></div>
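<div xmlns="http://www.tei-c.org/ns/1.0"><p>As a sketch of the set-up step, and assuming that the repository is named OpenFaceR under the GitHub account above and that devtools installation has already been implemented (neither is confirmed at the time of writing), installation could look like this:</p><p># Install the required CRAN packages
install.packages(c("tidyverse", "pracma", "devtools"))

# Install OpenFaceR from GitHub (repository name assumed; devtools support is planned)
devtools::install_github("davidecannatanuig/OpenFaceR")
library(OpenFaceR)</p></div>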
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 2 OpenFaceR workflow and an operative example</head><p>To facilitate readers' comprehension of this six-step process, we employ the example of a simple psychological experiment investigating the effects of positive and negative memories on facial behaviours. In our example, the researcher has collected videos of 50 students telling one story about a personal success and one story about a personal failure in front of a camera. The hypothesis is that students will smile more frequently and will display more intense facial activity in the success story condition.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Videos to CSVs using OpenFace</head><p>Prior to using the utility functions in OpenFaceR, users must process their videos using OpenFace. To help users produce the appropriate syntax for these commands in Windows, OpenFaceR provides the function get_commands(), which outputs the commands and parameters for executing OpenFace on a single video or on a set of videos contained in a folder. After running get_commands(), the user can copy and paste its output at the command line to initiate the analysis or analyses. In the example described above, the input_dir is the folder containing the 100 video files recording the students telling their stories. The output_dir will hold the 100 CSVs produced by OpenFace. The duration of this process depends on the user's computer hardware, specifically the GPU, CPU and storage medium (e.g. SSD drive).</p></div>
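<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal usage sketch of this step is given below; the argument names follow the input_dir and output_dir described above, but the exact signature of get_commands() may differ in the package.</p><p>library(OpenFaceR)

# Folders assumed for the example study: 100 videos in, 100 CSV files out
input_dir  &lt;- "C:/success_failure_study/videos"
output_dir &lt;- "C:/success_failure_study/openface_output"

# Prints the Windows command(s) for running OpenFace on every video in input_dir;
# copy and paste the printed output at the command prompt to start the analysis
get_commands(input_dir, output_dir)</p></div>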
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">CSVs to faces objects</head><p>From Step 2, the remaining steps are completed in the R environment. The function read_faces_csv() allows the user to import all the CSV files contained in a folder into an object of class "faces". Faces is a new bespoke S3 class that inherits from lists and is, in fact, a list of tibbles, with each tibble representing the output from one video. In our example, read_faces_csv() will import the 100 CSV files saved in the output_dir from the previous step and produce a faces object containing 100 tibbles. The time needed to perform this operation, although it depends on the local machine's specifications, can be significant.</p></div>
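<div xmlns="http://www.tei-c.org/ns/1.0"><p>Conceptually, the import step behaves like the tidyverse sketch below, which reads every CSV file in a folder into a named list of tibbles; the actual read_faces_csv() function additionally assigns the bespoke "faces" class and may differ in its details.</p><p>library(tidyverse)

# Illustrative sketch of the import step, not the package's actual code
csv_paths &lt;- list.files(output_dir, pattern = "\\.csv$", full.names = TRUE)
faces &lt;- csv_paths %&gt;%
  set_names(basename(csv_paths)) %&gt;%  # keep the video file name as the element name
  map(read_csv)                       # one tibble per video
# In OpenFaceR the resulting list carries the S3 class "faces"</p></div>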
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Filtering</head><p>It is often necessary to filter out certain faces or conditions due to errors in the extraction of data, low confidence and so on. The verb filter_faces() allows the user to filter all the tibbles of a faces object, with a grammar that echoes the dplyr filter method. A typical filter is set up for "success", the variable indicating whether the extraction of data was reliably done for each frame of the video. It is also possible to standardize the extraction parameters of the videos by filtering. For example, to standardize the duration of the videos that will be analyzed, one can filter the timestamps, restricting them to minimum and maximum values. In our example, the researcher wants to standardize the length of the videos, as time might also influence the production of smiles and face activity. They can therefore use filter_faces(timestamp &lt; 180) to take only the first 3 minutes of each video. The result can be stored in a new, filtered faces object or, by using the pipe sign %&gt;%, steps 3 to 6 can be chained in a series and the output returned as a tidy data frame.</p></div>
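<div xmlns="http://www.tei-c.org/ns/1.0"><p>For the example study, the filtering step could look like the sketch below; the syntax follows the descriptions above, and whether filter_faces() accepts several conditions in one call is an assumption.</p><p># Keep only reliably tracked frames and restrict every video to its first 3 minutes
faces_filtered &lt;- faces %&gt;%
  filter_faces(success == 1) %&gt;%
  filter_faces(timestamp &lt; 180)</p></div>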
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Feature engineering</head><p>OpenFaceR provides two verbs to manipulate the variables of each video and engineer new features. mutate_faces() echoes dplyr::mutate(). The function transform_faces() meets the need for transformation functions that take as input a preset selection of variables, as opposed to the mutate method, which is designed for working with user-specified variables. The two verbs are accompanied by a growing number of functions specifically designed for analyzing faces. In our example, the researcher uses mutate_faces(smile = ifelse(AU06_c + AU12_c == 2, 1, 0)) to calculate when the experimental subjects are displaying the two AUs characterizing smiles. Furthermore, they will use the function transform_faces("mei", mei) for calculating the corrected average motion of the face region <ref type="bibr" target="#b24">[25]</ref>, a measure of facial activity.</p></div>
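<div xmlns="http://www.tei-c.org/ns/1.0"><p>Chaining the two feature-engineering verbs for the example study could look like the following sketch; the call syntax is taken from the descriptions above and may differ slightly in the package.</p><p>faces_engineered &lt;- faces_filtered %&gt;%
  # a frame counts as a true smile when AU06 (cheek raiser) and
  # AU12 (lip corner puller) are both marked as present
  mutate_faces(smile = ifelse(AU06_c + AU12_c == 2, 1, 0)) %&gt;%
  # corrected average motion of the face region (motion energy), stored as "mei"
  transform_faces("mei", mei)</p></div>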
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5">Selection of features</head><p>We have implemented the verb select_faces() to select which features are eventually summarised, echoing the select() method from dplyr. As frame, timestamp and success are critical meta-variables, select_faces() always returns those in the output. In our example the researcher will use select_faces(smile, mei) to select the two variables they intend to summarise.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.6">Tidy dataset consolidation</head><p>The function tidy_face() is designed to transform a preprocessed faces object into a tidy dataset with one person per row and all the most common statistics. Fig. <ref type="figure">3</ref> summarizes the function's architecture. Here, the arrows represent the logical steps, while the boxes represent the methods used, including the inputs they take from the main function. First, tidy_face() merges all the tibbles of the faces object into one single tibble through the merge_faces() method. Second, it calculates the length of each video. Third, it classifies the variables into continuous (e.g. distance of the face from the camera) and discrete (events, such as blinks and smiles). Fourth, if the events parameter is set to TRUE (the default), all the discrete variables are summarized. Events can be summarized by simply counting them ("count"), as events per second ("eps"), as events per minute ("epm") or as an event ratio ("ratio", the number of frames in which the event happens divided by the total number of frames). Fifth, if the continuous parameter is set to TRUE (the default), all the continuous variables are summarized with a choice of methods including the mean, median, standard deviation, minimum and maximum. Finally, all the summarized variables are merged into a tidy dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 3 Tidy_face architecture</head><p>In our example, calling tidy_faces(events_sum = "epm", median = TRUE) will return one data frame with 100 rows (one per video) with columns for video ID, video duration, the mean, standard deviation and median of facial activity, and the number of smiles per minute. The researchers can then test their hypotheses by running t-tests, a repeated-measures MANOVA or other statistical approaches available in R or other packages.</p></div>
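<div xmlns="http://www.tei-c.org/ns/1.0"><p>Putting steps 2 to 6 together, a possible end-to-end sketch for the example study is shown below; the function names and arguments follow the descriptions above, while the output column names (e.g. id, smile_epm) and the way the condition is recovered from the video identifier are assumptions made for illustration.</p><p>library(tidyverse)
library(OpenFaceR)

results &lt;- read_faces_csv(output_dir) %&gt;%           # 100 tibbles, one per video
  filter_faces(success == 1) %&gt;%                    # keep reliably tracked frames
  filter_faces(timestamp &lt; 180) %&gt;%                 # first 3 minutes of each video
  mutate_faces(smile = ifelse(AU06_c + AU12_c == 2, 1, 0)) %&gt;%
  transform_faces("mei", mei) %&gt;%
  select_faces(smile, mei) %&gt;%
  tidy_faces(events_sum = "epm", median = TRUE)     # one row per video

# Example test (column names assumed): do students smile more per minute in the
# success condition? A paired test or repeated-measures model would respect the
# within-subjects design; a simple comparison is shown here for illustration.
results &lt;- results %&gt;%
  mutate(condition = if_else(str_detect(id, "success"), "success", "failure"))
t.test(smile_epm ~ condition, data = results)</p></div>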
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions</head><p>In this paper we have explained the goal of OpenFaceR and outlined the main characteristics of the workflow from raw video data to a dataset that can be used for typical statistical analyses in the social sciences. OpenFaceR is still in its infancy and new functions are being built, providing the most common methods of summarizing FNVBs. The final goal of this enterprise is to compile an R package to be published on CRAN. Questions, feedback, collaborations, and ideas are welcome.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="5,136.08,147.48,358.80,178.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="7,124.80,359.40,386.88,182.40" type="bitmap" /></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>-56 by 2 (x and y) 2D eye landmark positions -56 by 3 (x, y and z) 3D eye landmark positions -x, y and z coordinates of the head position -Roll, pitch and yaw of the head -68 by 3 (x, y and z) facial landmarks positions -18 by 2 (presence and intensity) AUs</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Toward a social Psychophysics of Face Communication</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Jack</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">G</forename><surname>Schyns</surname></persName>
		</author>
		<idno type="DOI">10.1146/annurev-psych-010416-044242</idno>
	</analytic>
	<monogr>
		<title level="j">Annu. Rev. Psychol</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page" from="269" to="297" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Nonverbal Emotion Displays , Communication Modality , and the Judgment of Personality</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Hall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">D</forename><surname>Gunnery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Andrzejewski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Res. Pers</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="77" to="83" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Nonverbal indicators of deception: How iconic gestures reveal thoughts that cannot be suppressed</title>
		<author>
			<persName><forename type="first">D</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Beattie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Shovelton</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semiotica</title>
		<imprint>
			<biblScope unit="volume">2010</biblScope>
			<biblScope unit="page" from="133" to="174" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The importance of nonverbal cues in judging rapport</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Grahe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Bernieri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Nonverbal Behav</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="253" to="269" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Contributions of Nonverbal Cues to the Accurate Judgment of Personality Traits</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Breil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Osterholz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nestler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Back</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Oxford Handbook of Accurate Personality Judgment</title>
				<editor>
			<persName><forename type="first">T</forename><forename type="middle">D</forename><surname>Letzring</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Spain</surname></persName>
		</editor>
		<meeting><address><addrLine>Oxford, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="54" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The Mimicry Among Us: Intra-and Inter-Personal Mechanisms of Spontaneous Mimicry</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Arnold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Winkielman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Nonverbal Behav</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="195" to="212" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Accuracy of Judging Personality</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Back</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nestler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Social Psychology of Perceiving Others Accurately</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Hall</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Schmid Mast</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>West</surname></persName>
		</editor>
		<meeting><address><addrLine>Cambridge, UK</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="98" to="124" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The Riverside Behavioral Q-sort: A Tool for the Description of Social Behavior</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Funder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Furr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">R</forename><surname>Colvin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Pers</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page" from="451" to="489" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Assessing Group Interactions in Personality Psychology</title>
		<author>
			<persName><forename type="first">M</forename><surname>Grünberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mattern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Geukes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C P</forename><surname>Küfner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Back</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Cambridge Handbook of Group Interaction Analysis</title>
				<meeting><address><addrLine>UK</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="page" from="602" to="611" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Facial action coding system: A technique for the measurement of facial movement</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ekman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">V</forename><surname>Friesen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1978">1978</date>
			<publisher>Consulting Psychologist Press</publisher>
			<pubPlace>Berkeley, CA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Behavioral observation</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Furr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">C</forename><surname>Funder</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Research Methods in Personality Psychology</title>
				<editor>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Robins</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Fraley</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><forename type="middle">F</forename><surname>Krueger</surname></persName>
		</editor>
		<meeting><address><addrLine>NY</addrLine></address></meeting>
		<imprint>
			<publisher>Guilford Press</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Toward an Integrative Approach to Nonverbal Personality Detection</title>
		<author>
			<persName><forename type="first">D</forename><surname>Cannata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Simon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lepri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Back</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>O'hora</surname></persName>
		</author>
		<imprint/>
	</monogr>
	<note>In press</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">OpenFace 2.0: Facial Behavior Analysis Toolkit</title>
		<author>
			<persName><forename type="first">T</forename><surname>Baltrušaitis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">C</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">P</forename><surname>Morency</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">13th IEEE International Conference on Automatic Face &amp; Gesture Recognition</title>
				<meeting><address><addrLine>FG</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018. 2018</date>
			<biblScope unit="page" from="59" to="66" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Convolutional experts constrained local model for 3D facial landmark detection</title>
		<author>
			<persName><forename type="first">A</forename><surname>Zadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">C</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Baltrušaitis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">P</forename><surname>Morency</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. -2017 IEEE Int. Conf. Comput. Vis. Work. ICCVW 2017 2018-Janua</title>
				<meeting>-2017 IEEE Int. Conf. Comput. Vis. Work. ICCVW 2017 2018-Janua</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="2519" to="2528" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Rendering of Eyes for Eye-Shape Registration and Gaze Estimation Erroll</title>
		<author>
			<persName><forename type="first">E</forename><surname>Wood</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceeding:s of the IEEE International Conference on Computer Vision</title>
				<meeting>eeding:s of the IEEE International Conference on Computer Vision</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="3756" to="3764" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Cross-dataset learning and person-specific normalisation for automatic Action Unit detection</title>
		<author>
			<persName><forename type="first">T</forename><surname>Baltrušaitis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mahmoud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Robinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)</title>
				<imprint>
			<date type="published" when="2015">2015. 2015</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Frame-differencing methods for measuring bodily synchrony in conversation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Paxton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behav. Res. Meth</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="329" to="343" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Tidy Data</title>
		<author>
			<persName><forename type="first">H</forename><surname>Wickham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Stat. Softw</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="1" to="23" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Welcome to the Tidyverse</title>
		<author>
			<persName><forename type="first">H</forename><surname>Wickham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Open Source Softw</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page">1686</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Tibble: Simple Data Frames</title>
		<author>
			<persName><forename type="first">K</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wickham</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m">R Core Team: A Language and Environment for Statistical Computing</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m">RStudio: Integrated Development Environment for R</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Python 3 Reference Manual</title>
		<author>
			<persName><forename type="first">G</forename><surname>Van Rossum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">L</forename><surname>Drake</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>CreateSpace</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Automatic differentiation in PyTorch</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pazske</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">31st Conference on Neural Information Processing Systems</title>
				<meeting><address><addrLine>NIPS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Pracma: Practical Numerical Math Functions</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Brochers</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note>R package 2.0.7</note>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Motion energy analysis (MEA): A primer on the assessment of motion from video</title>
		<author>
			<persName><forename type="first">F</forename><surname>Ramseyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Couns. Psychol</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="page" from="536" to="549" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
