<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Deep Learning-Enhanced Detection of Lie Tendencies through Answer Pattern Analysis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Debanil</forename><surname>Chanda</surname></persName>
							<email>dchanda6@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science &amp; Technology</orgName>
<orgName type="institution">University of North Bengal</orgName>
								<address>
									<addrLine>Raja Rammohanpur</addrLine>
									<postCode>734013</postCode>
									<settlement>Darjeeling</settlement>
									<region>West Bengal</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Rakesh</forename><surname>Kumar Mandal</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science &amp; Technology</orgName>
<orgName type="institution">University of North Bengal</orgName>
								<address>
									<addrLine>Raja Rammohanpur</addrLine>
									<postCode>734013</postCode>
									<settlement>Darjeeling</settlement>
									<region>West Bengal</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Deep Learning-Enhanced Detection of Lie Tendencies through Answer Pattern Analysis</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">717B356CC622E440C9F10CD9C8F0F0DD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Lie detection</term>
					<term>Deep Learning</term>
					<term>Answer Pattern Analysis</term>
					<term>Strategic Interview Technique (SIT)</term>
					<term>Behavioral Analysis</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Detecting deceptive behavior is a critical challenge across various domains, including security, recruitment, and criminal investigations. Traditional methods, such as polygraphs, rely on physiological cues and often lack reliability and scalability. This study introduces a deep learning-based methodology that enhances deception detection through the analysis of answer patterns derived from a Strategic Interview Technique (SIT) and publicly available datasets, including LIAR and Deceptive Opinion Spam. By integrating cognitive behavioral features such as response consistency, delay, and reactions to unexpected questions with textual embeddings generated from Bi-LSTM networks, the model provides a comprehensive framework for detecting lie tendencies. The proposed method demonstrates exceptional performance, achieving an accuracy of 89.5% and an F1-score of 88.9%, outperforming recent studies in the field. Comparative analysis highlights its robustness in distinguishing truthful and deceptive responses across structured and unstructured data. Error analysis reveals areas for refinement, including addressing false positives caused by ambiguous responses and false negatives in rehearsed deception. The model's reliance on cost-effective and non-invasive features makes it scalable and practical for real-world applications. This work lays the foundation for integrating multimodal data, such as audio and video, to further enhance the effectiveness of deception detection systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction 1.1. Significance</head><p>Lie detection is a critical area of research with applications in law enforcement, recruitment, and psychological assessments. Traditional methods, such as polygraphs, rely on physiological signals but face criticism for being invasive, susceptible to countermeasures, and prone to manipulation <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Advances in behavioral and linguistic analysis offer a more robust alternative <ref type="bibr" target="#b2">[3]</ref>. Answer patterns during structured interviews, for instance, provide cognitive and behavioral cues that are valuable for detecting deception <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. Some lie detection techniques rely on question-answering approaches, such as the Pattern Variation Method to Detect Lie using Artificial Neural Network (PVMANN) and the Pattern Variation Method with Modified Weights to Detect Lie using Artificial Neural Network (PVMMWANN) <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Both methods only require a personal computer, with suspects interviewed in a tension-free environment. In these methods, the same questions are asked daily over several days, under the belief that longer intervals between interviews might lead to inconsistencies in a liar's answers, as repeated interrogation could exploit cognitive strain <ref type="bibr" target="#b7">[8]</ref>. However, studies have shown that liars can be as consistent as truthful individuals, even with extended intervals between interviews. This creates challenges for traditional repetitive questioning approaches, as liars may rehearse their answers to appear truthful <ref type="bibr" target="#b8">[9]</ref>. To address this, interviews should be conducted strategically, so that repeating the same answer becomes difficult for the suspect.
Strategically framed questions make it easier to detect deception without relying on visible negative signs. Such techniques enable the distinction between truthful and dishonest individuals by introducing subtle variations in the questioning process, forcing liars to engage cognitively in ways that reveal inconsistencies <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2.">Related Work</head><p>Lie detection has been an essential focus of study, with traditional methods such as polygraph testing relying on physiological signals like heart rate, skin conductance, and respiratory patterns <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>. While widely used, these methods have several limitations, including invasiveness, high dependency on instrumentation, and susceptibility to countermeasures <ref type="bibr" target="#b0">[1]</ref>. These drawbacks have motivated researchers to explore alternative approaches that focus on cognitive and behavioral indicators <ref type="bibr" target="#b4">[5]</ref>. Several techniques based on question-answering have been developed to detect deception. Notable among these are the Pattern Variation Method to Detect Lie using Artificial Neural Network (PVMANN) and the Pattern Variation Method with Modified Weights to Detect Lie using Artificial Neural Network (PVMMWANN) <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Both methods are efficient, requiring only a personal computer, and involve interviewing individuals in a relaxed, tension-free environment. The same questions are repeated daily over several days to detect inconsistencies, under the assumption that liars would struggle to maintain consistency over time. However, research has shown that liars can exhibit consistency levels comparable to truthful individuals, even with extended intervals between interviews <ref type="bibr" target="#b8">[9]</ref>. These findings suggest that repetitive questioning alone may not be sufficient to detect deception, especially for well-prepared individuals <ref type="bibr" target="#b13">[14]</ref>. 
To address this limitation, strategically designed questions have been proposed, making it difficult for liars to maintain fabricated answers while remaining straightforward for truthful individuals <ref type="bibr" target="#b14">[15]</ref>. This approach leverages cognitive load and behavioral variability to improve the accuracy of deception detection <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b15">16]</ref>. Some studies have explored unconventional tools for deception detection, such as analyzing cognitive tasks like drawing to reveal inconsistencies in deceptive individuals <ref type="bibr" target="#b16">[17]</ref>. Dialog-based systems have also been explored for deception detection, leveraging natural language processing to identify linguistic cues <ref type="bibr" target="#b17">[18]</ref>. Machine learning techniques have likewise been extensively explored in this domain, particularly for analyzing textual and behavioral data <ref type="bibr" target="#b18">[19]</ref>. Early machine learning models, such as support vector machines and decision trees, relied heavily on handcrafted features like n-grams and sentiment analysis to classify responses as truthful or deceptive <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21,</ref><ref type="bibr" target="#b21">22]</ref>. Although these methods demonstrated potential, their scalability and performance were limited when applied to large or unstructured datasets <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>. Progress in deep learning has greatly enhanced the field by enabling the analysis of complex patterns in multimodal data. Methods like the hybrid CNN-LSTM architecture proposed by Mendels et al. (2017), the multimodal neural network developed by <ref type="bibr" target="#b18">Krishnamurthy et al. 
(2018)</ref>, and the language-guided deep learning model explored by <ref type="bibr" target="#b2">Wang et al. (2020)</ref> achieved promising results by integrating audio, text, and visual features <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b21">22]</ref>. While effective, these approaches often require multimodal datasets and substantial computational resources, making them less practical for general use. Existing methods often face challenges related to scalability, data requirements, and generalizability <ref type="bibr" target="#b1">[2]</ref>. By leveraging behavioral metrics and deep learning techniques, the proposed approach addresses these limitations, contributing to the advancement of lie detection research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3.">Objective</head><p>This research aims to develop a robust deep learning-based framework for detecting deception by integrating textual and behavioral features. The key objectives of this study are:</p><p>• Incorporating Behavioral Metrics: Utilize response consistency, delay, and unexpected question reactions derived from SIT to detect cognitive strain indicative of deception <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b23">24]</ref>. • Leveraging Deep Learning: Design a Bi-LSTM-based architecture to process both behavioral and textual features, enhancing detection accuracy <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b25">26]</ref>. • Evaluating Model Performance: Compare the proposed methodology with existing approaches using accuracy, precision, recall, and F1-score as metrics <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b19">20]</ref>. • Real-World Applicability: Demonstrate the practicality of the methodology for applications such as recruitment, security assessments, and criminal investigations <ref type="bibr" target="#b26">[27]</ref>. By achieving these objectives, this work bridges the gap between traditional behavioral analysis and modern deep learning techniques, providing a scalable, efficient, and effective solution for deception detection.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head><p>This study employs a deep learning-based approach to detect lies by integrating textual and behavioral features derived from multiple datasets. The methodology includes data collection, feature engineering, and the design of a hybrid Bi-LSTM model that leverages the complementary strengths of behavioral and linguistic analysis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Data Collection and Preparation</head><p>The model is trained and evaluated on a combined dataset consisting of three sources: the LIAR dataset <ref type="bibr" target="#b21">[22]</ref>, the Deceptive Opinion Spam dataset <ref type="bibr" target="#b19">[20]</ref>, and a custom Strategic Interview Technique (SIT) dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.1.">Strategic Interview Technique (SIT) Dataset</head><p>SIT involves strategically framed and repeated questions to elicit consistent or deceptive behavioral patterns, with interviewees subtly challenged to maintain consistency. Table <ref type="table" target="#tab_0">1</ref> shows sample interview questions, demonstrating both the repetitive nature and the variations in phrasing used to encourage cognitive consistency. The variety of topics and the rephrasing within each category are aimed at detecting changes in response consistency <ref type="bibr" target="#b27">[28]</ref>. Collected data include Yes/No answers, response times, cognitive metrics, and reactions to unexpected or cognitively challenging questions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.2.">LIAR Dataset</head><p>• Includes 12,836 labeled political statements with metadata (e.g., speaker, context, credibility).</p><p>• Truthfulness levels: true, mostly true, half true, false, barely true, and pants on fire.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.3.">Deceptive Opinion Dataset</head><p>• Provides 1,600 truthful and deceptive hotel reviews, categorized into positive and negative statements. • Features: Textual content, word count and sentiment polarity.</p><p>The datasets were combined, as shown in Table <ref type="table" target="#tab_1">2</ref>, to train the model effectively. The combination of these datasets enhances the robustness and generalizability of the study: it exposes the model to a wide variety of features, facilitates domain-specific error analysis, and ensures the model learns from both structured features (e.g., response consistency) and unstructured text data (e.g., LIAR statements, consumer reviews).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Feature Engineering and Preprocessing</head><p>The success of any deep learning-based deception detection system relies heavily on the quality of features extracted from the input data. This study integrates three distinct datasets-responses collected via the Strategic Interview Technique (SIT), the LIAR dataset, and the Deceptive Opinion Spam dataset. Each dataset undergoes tailored preprocessing and feature engineering to ensure consistency and compatibility for training a unified deep learning model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.1.">Behavioral Features (SIT)</head><p>• Response Consistency (C): The calculation of the Response Consistency is shown in Equation <ref type="formula">1</ref>:</p><formula xml:id="formula_0">C = (1/n) ∑ᵢ₌₁ⁿ (xᵢ − x̄)² (1)</formula><p>Where n is the number of repeated responses, xᵢ is the response at instance i, and x̄ is the mean response. A higher C may indicate potential deception. • Response Delay (R): The formula to calculate the Response Delay is shown in Equation <ref type="formula" target="#formula_1">2</ref>:</p><formula xml:id="formula_1">R = (1/n) ∑ᵢ₌₁ⁿ tᵢ<label>(2)</label></formula><p>Where tᵢ is the time taken for each response. Increased R suggests cognitive processing, often associated with deception. • Unexpected Question Score (UQS): The UQS is calculated by averaging response variances to unexpected questions. Higher UQS values indicate spontaneous inconsistencies, a potential indicator of deception.</p></div>
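As a concrete illustration, the three behavioral metrics above can be computed directly from Equations 1 and 2 and the UQS definition. This is a minimal Python sketch; the function names and the numeric coding of Yes/No answers (1/0) are assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical helpers for the SIT behavioral metrics (Eq. 1, Eq. 2, UQS).

def response_consistency(responses):
    """C: population variance of repeated, numerically coded answers (Eq. 1)."""
    n = len(responses)
    mean = sum(responses) / n
    return sum((x - mean) ** 2 for x in responses) / n

def response_delay(times):
    """R: mean time taken per answer, e.g. in seconds (Eq. 2)."""
    return sum(times) / len(times)

def unexpected_question_score(question_groups):
    """UQS: mean of per-question response variances for unexpected questions."""
    return sum(response_consistency(g) for g in question_groups) / len(question_groups)
```

A respondent who answers the same Yes/No question (coded 1/0) identically across repetitions has C = 0; switching answers raises C, which the model treats as a potential deception signal.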
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2.">Textual Features (LIAR and Deceptive Opinion Spam Datasets)</head><p>Both the LIAR and Deceptive Opinion Spam datasets provide textual data labeled as truthful or deceptive.</p><p>The following preprocessing steps are applied to ensure the extraction of meaningful semantic and syntactic features:</p><p>• Text Cleaning:</p><p>-Removal of punctuation, special characters, and stopwords.</p><p>-Conversion to lowercase to standardize input. • Tokenization and Lemmatization:</p><p>-Tokenization splits sentences into individual words.</p><p>-Lemmatization reduces words to their base or dictionary forms, ensuring consistency.</p><p>• Word Embedding:</p><p>-Represent words as dense vectors using pretrained embeddings such as BERT. These embeddings capture semantic and syntactic relationships between words <ref type="bibr" target="#b28">[29]</ref>. -BERT embeddings are particularly beneficial as they consider the context of words in a sentence, providing a nuanced representation of deceptive statements.</p></div>
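The dependency-free parts of this preprocessing pipeline (cleaning and tokenization) can be sketched as below. Lemmatization and BERT embeddings require external libraries (e.g., spaCy or NLTK, and Hugging Face transformers) and are omitted; the tiny stopword list is illustrative only.

```python
import re

# Illustrative stopword list; real pipelines use a full list from an NLP library.
STOPWORDS = {"the", "a", "an", "is", "was", "to", "and"}

def clean_and_tokenize(text):
    text = text.lower()                       # standardize case
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # drop punctuation/special chars
    tokens = text.split()                     # whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]
```

For example, `clean_and_tokenize("The hotel was AMAZING!")` yields `["hotel", "amazing"]`, which would then be lemmatized and mapped to contextual embeddings.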
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.3.">Sentiment and Metadata Features</head><p>• Sentiment Analysis: Sentiment polarity scores (positive, negative, or neutral) are extracted using natural language processing (NLP) libraries like VADER <ref type="bibr" target="#b29">[30]</ref>. These scores are particularly relevant for the Deceptive Opinion Spam dataset, as deceptive reviews often exhibit exaggerated sentiment. • Metadata Encoding: Speaker credibility, political affiliation, and context from the LIAR dataset are encoded numerically using one-hot encoding or embedding layers, depending on the deep learning model's architecture.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.4.">Data Normalization and Augmentation</head><p>To ensure uniformity and enhance model generalizability, the following techniques are used:</p><p>• Normalization: Numerical features (e.g., C, R, UQS) are normalized using Min-Max scaling <ref type="bibr" target="#b30">[31]</ref>, as shown in Equation <ref type="formula" target="#formula_2">3</ref>:</p><formula xml:id="formula_2">x′ = (x − min(x)) / (max(x) − min(x))<label>(3)</label></formula><p>Where x represents the original feature value, min(x) and max(x) are the minimum and maximum values of the feature in the dataset, and x′ is the normalized value. • Data Augmentation: Synthetic samples are generated for the SIT dataset by simulating variations in response delay and consistency, ensuring balance between truthful and deceptive classes. Textual data augmentation includes synonym replacement and backtranslation techniques, particularly for small subsets of the Deceptive Opinion Spam dataset.</p></div>
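Equation 3 amounts to a one-line transformation per feature column; a minimal sketch (hypothetical helper name):

```python
def min_max_scale(values):
    """Min-Max scaling of one feature column per Equation 3: maps to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]
```

Applied to response delays such as `[1.5, 3.0, 2.2]`, the smallest value maps to 0, the largest to 1, and intermediate values fall proportionally in between, so C, R, and UQS share a common scale before entering the network.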
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Deep Learning Model Architecture</head><p>The deep learning framework developed in this study is designed to analyze both textual and behavioral features, leveraging their complementary nature to detect deceptive tendencies with high accuracy. The model employs a multi-branch architecture that processes distinct feature types-textual embeddings and behavioral metrics-through specialized neural network layers, culminating in a unified classification output.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.1.">Overview of the Architecture</head><p>• A textual branch that processes semantic information using a Bi-directional Long Short-Term Memory (Bi-LSTM) network. • A behavioral branch that analyzes numerical features using fully connected dense layers.</p><p>These branches are integrated through a concatenation layer, followed by a classification layer that predicts the likelihood of truthfulness or deception. The modular nature of this architecture allows seamless incorporation of additional feature types, such as metadata or audio signals, if required.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.2.">Input Layers</head><p>The input to the model consists of:</p><p>• Textual Data: Preprocessed text embeddings from the LIAR and Deceptive Opinion Spam datasets. Embeddings are generated using pretrained BERT models, capturing both semantic and contextual nuances. • Behavioral Data: Numerical features derived from SIT responses, including Response Consistency (C), Response Delay (R), and Unexpected Question Score (UQS).</p><p>Each input type is normalized and scaled to ensure compatibility with the subsequent layers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.3.">Textual Feature Processing (Bi-LSTM)</head><p>The textual branch employs a Bi-LSTM network to capture sequential dependencies and contextual relationships in the input text:</p><p>• Embedding Layer: Pretrained BERT embeddings are used to represent each word as a dense vector. These embeddings are fine-tuned during training to align with the deception detection task. • Bi-LSTM Layer: The Bi-LSTM network processes the sequence of embeddings, capturing both forward and backward temporal dependencies. The hidden states of the Bi-LSTM encode contextual relationships between words, which are crucial for detecting nuanced patterns of deception. • Dropout Layer: A dropout rate of 0.3 is applied to prevent overfitting, ensuring robust generalization to unseen data.</p><p>The output of the Bi-LSTM layer is a fixed-dimensional vector representing the entire input text, which is passed to the concatenation layer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.4.">Behavioral Feature Processing (Dense Layers)</head><p>The behavioral branch processes numerical features through fully connected dense layers:</p><p>• Input Layer: Accepts normalized behavioral features (C, R, and UQS) as input.</p><p>• Dense Layers: Two fully connected layers, each with 128 and 64 neurons, apply non-linear transformations using ReLU activation. These layers enable the network to learn complex relationships among behavioral metrics. • Dropout Layer: A dropout rate of 0.2 is applied after each dense layer to reduce overfitting.</p><p>The final output of the behavioral branch is a feature vector summarizing patterns in the behavioral data.</p></div>
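At the shape level, the behavioral branch is two matrix multiplications with ReLU activations. The NumPy sketch below uses random placeholder weights (not trained parameters) purely to show the dimensions involved: 3 normalized inputs (C, R, UQS) → 128 neurons → 64 neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return relu(x @ w + b)

# Placeholder weights mirroring the stated layer sizes (3 -> 128 -> 64).
W1, b1 = rng.normal(size=(3, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 64)), np.zeros(64)

features = np.array([[0.25, 1.5, 0.35]])  # one SIT sample: (C, R, UQS)
h = dense(dense(features, W1, b1), W2, b2)
# h is the (1, 64) behavioral feature vector passed to the concatenation layer
```

Dropout, applied between these layers during training, is omitted here since it only zeroes activations stochastically and does not change the shapes.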
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.5.">Integration (Concatenation Layer)</head><p>The outputs of the Bi-LSTM and dense layers are concatenated into a unified feature vector. This layer enables the model to jointly analyze textual and behavioral patterns, leveraging their complementary strengths.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.6.">Classification Layer</head><p>The concatenated feature vector is passed through a series of dense layers for classification:</p><p>• Dense Layers: Two fully connected layers with 64 and 32 neurons, using ReLU activation.</p><p>• Output Layer: A softmax layer outputs probabilities for two classes: truthful and liar.</p><p>The final prediction is based on the class with the highest probability. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.7.">Loss Function and Optimization</head><p>The model is trained to minimize the Categorical Cross-Entropy Loss, shown in Equation <ref type="formula" target="#formula_3">4</ref>:</p><formula xml:id="formula_3">L = −(1/N) ∑ᵢ₌₁ᴺ ∑ⱼ₌₁ᴷ yᵢⱼ log(ŷᵢⱼ)<label>(4)</label></formula><p>Where yᵢⱼ is the true label for class j of sample i, ŷᵢⱼ is the predicted probability for class j, N is the number of samples, and K is the number of classes (truthful and liar). The Adam optimizer is used for efficient and adaptive gradient updates, with an initial learning rate of 10⁻⁴ <ref type="bibr" target="#b31">[32]</ref>. Early stopping is applied during training to prevent overfitting <ref type="bibr" target="#b32">[33]</ref>.</p></div>
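Equation 4 can be transcribed directly in plain Python; this is an illustrative check of the loss definition, not the training code (frameworks compute it with numerically stabilized operations).

```python
import math

def cross_entropy(y_true, y_pred):
    """Categorical cross-entropy (Eq. 4).

    y_true: one-hot labels, y_pred: predicted class probabilities,
    both of shape N x K (lists of lists).
    """
    n = len(y_true)
    total = 0.0
    for yi, pi in zip(y_true, y_pred):
        # only the true class contributes, since y_ij is 0 elsewhere
        total += sum(y * math.log(p) for y, p in zip(yi, pi) if y)
    return -total / n
```

A confident correct prediction contributes a loss near zero, while an uncertain prediction (e.g., 0.5/0.5 on two classes) contributes log 2 ≈ 0.693 per sample.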
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.8.">Model Training and Validation</head><p>The combined dataset was divided into training (80%), validation (10%), and test (10%) sets. Cross-validation was employed to ensure generalizability across diverse data domains. The training set is used to update the model weights, the validation set monitors performance during training for early stopping, and the test set evaluates the model's generalization on unseen data. A batch size of 32 is used, with training conducted over 50 epochs or until early stopping criteria are met. Performance is measured using accuracy, precision, recall, and F1 score. The overall workflow is illustrated in Figure <ref type="figure" target="#fig_0">1</ref>, which provides a step-by-step depiction of how the inputs are processed, features are extracted, and predictions are made.</p></div>
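The 80/10/10 split can be sketched with shuffled indices; in practice a framework utility (e.g., scikit-learn's train_test_split) would typically be used. The seed and function name here are illustrative.

```python
import random

def split_indices(n, seed=42):
    """Shuffle sample indices and split 80% / 10% / 10% (train/val/test)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(1000)
# 800 training, 100 validation, and 100 test samples, with no overlap
```

Shuffling before splitting matters here because the combined dataset concatenates three sources (SIT, LIAR, Deceptive Opinion Spam); without it, each split would be dominated by a single domain.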
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Result Analysis and Discussion</head><p>The proposed methodology integrates cognitive principles with deep learning models to effectively classify truthful and deceptive responses. The evaluation is based on behavioral features from SIT and textual patterns from LIAR and Deceptive Opinion Spam datasets. The model's performance is analyzed using key metrics, comparative studies, and visual aids, ensuring a comprehensive understanding of its capabilities.   </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Model Performance</head><p>The performance of the proposed model was evaluated using standard metrics: accuracy, precision, recall, and F1-score. The results are presented in Figure <ref type="figure" target="#fig_1">2</ref> and Table <ref type="table" target="#tab_2">3</ref>. The proposed model achieves high accuracy (89.5%) and recall (92.4%), outperforming traditional techniques. Its superior F1-score (88.9%) highlights its balanced capability in identifying deceptive behavior while minimizing false positives and negatives.</p></div>
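For reference, the four reported metrics follow directly from confusion-matrix counts; the counts in the example below are made up for illustration and are not the study's actual confusion matrix.

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```

Because F1 is the harmonic mean of precision and recall, the reported F1 of 88.9% alongside a recall of 92.4% implies a somewhat lower precision, consistent with the model favoring the detection of deceptive responses.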
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Comparison with Recent Works</head><p>The model's performance is compared with other state-of-the-art methods, as shown in Table <ref type="table" target="#tab_3">4</ref> and visualized in Figure <ref type="figure" target="#fig_2">3</ref>. The proposed methodology achieves the highest accuracy and recall among the compared models. The integration of SIT behavioral features with textual embeddings gives it a competitive advantage.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">ROC Curve Analysis</head><p>The model's discriminative capability is illustrated in Figure <ref type="figure" target="#fig_3">4</ref>, which depicts the Receiver Operating Characteristic (ROC) curve. The Area Under the Curve (AUC) value of 0.75 indicates a moderate ability to differentiate between truthful and deceptive responses.</p></div>
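AUC has a simple probabilistic reading: it is the chance that a randomly chosen deceptive sample receives a higher model score than a randomly chosen truthful one, with ties counting half. A small sketch of this rank-based computation, with made-up scores (not the model's outputs):

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: P(score of a positive > score of a negative), ties = 0.5."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

Under this reading, an AUC of 0.75 means a randomly chosen deceptive response outscores a randomly chosen truthful one about three times out of four.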
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Error Analysis</head><p>A detailed error analysis, shown in Figure <ref type="figure" target="#fig_4">5</ref>, was conducted to identify limitations:</p><p>• False Positives: Truthful responses misclassified as deceptive, often due to ambiguous or overly concise answers. • False Negatives: Deceptive responses misclassified as truthful, typically observed in rehearsed or highly consistent responses.</p><p>To mitigate these errors:</p><p>• Enhanced SIT Questions: Increase the variability and complexity of questions to induce greater cognitive load.   </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Comparison with Traditional Methods</head><p>As shown in the Table <ref type="table" target="#tab_5">5</ref>, the proposed model achieves significantly higher accuracy than traditional techniques, such as the Polygraph Test, while requiring less time and no specialized equipment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Discussion</head><p>Refining SIT question design to increase variability and cognitive load, along with diversifying training datasets to include a broader demographic range, could further enhance model performance. Despite these challenges, the methodology's reliance on simple inputs and short test durations makes it cost-effective, scalable, and practical for real-world applications. This work establishes a strong foundation for future research. The integration of additional modalities, such as audio or video, could further improve the detection of deception and expand the applicability of this approach in domains such as security, recruitment, and criminal investigations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>In this study, an innovative approach was introduced for detecting an individual's tendency to lie by examining response patterns using an Artificial Neural Network (ANN) based on a Strategic Interview Technique (SIT). Unlike traditional lie detection methods that rely heavily on physiological responses or simplistic question-answering techniques, the proposed method analyzes subtle variations in answer consistency and response delay. The findings reveal that the ANN-based SIT model achieves a high accuracy rate of 89.5%, surpassing traditional methods such as the Polygraph and previous ANN-based lie detection techniques like PVMANN and PVMMWANN. By reducing the dependency on specialized equipment and minimizing testing time, this model provides a practical and accessible alternative for lie detection, particularly in settings where traditional methods may not be feasible or affordable. The analysis of Response Consistency (C) and Response Delay (R) proved valuable in distinguishing between truthful and deceptive individuals. While truthful individuals typically exhibit stable patterns and shorter response times, deceptive individuals tend to show higher variability and delay, highlighting cognitive load differences. However, certain limitations, such as the potential for manipulation by highly trained individuals and emotional influence, suggest areas for future improvement. Integrating additional biometrics, enhancing cultural sensitivity in questioning, and exploring adaptive question models could further enhance accuracy and applicability. In conclusion, the proposed ANN-based SIT model demonstrates significant progress in the field of lie detection by combining cognitive science principles with machine learning techniques. 
The results underscore its potential for practical use in criminal investigations, security assessments, and personnel evaluations, contributing a reliable, cost-effective, and less intrusive alternative to traditional methods.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Block diagram of the proposed deep learning-based methodology for detecting deceptive behavior.</figDesc><graphic coords="7,213.29,65.61,168.70,259.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Performance Metrics of the Proposed Model.</figDesc><graphic coords="8,175.84,170.69,243.59,187.04" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Performance Comparison with Recent Works.</figDesc><graphic coords="9,117.64,65.61,360.00,216.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: ROC Curve for the Proposed Model.</figDesc><graphic coords="9,153.64,320.45,288.00,216.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Error Distribution for the Proposed Model.</figDesc><graphic coords="10,124.84,65.61,345.60,259.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Sample Strategic Interview Questions and Expected Response Patterns</figDesc><table><row><cell>Question No</cell><cell>Category</cell><cell>Sample Question</cell><cell>Type</cell></row><row><cell>1</cell><cell>Personal Detail</cell><cell>Are you currently employed?</cell><cell>Yes/No</cell></row><row><cell>2</cell><cell>Personal Detail</cell><cell>Do you work for a private company?</cell><cell>Yes/No</cell></row><row><cell>3</cell><cell>Finances</cell><cell>Do you have any outstanding loans?</cell><cell>Yes/No</cell></row><row><cell>4</cell><cell>Finances</cell><cell>Are all your debts paid off?</cell><cell>Yes/No</cell></row><row><cell>5</cell><cell>Qualifications</cell><cell>Do you hold a graduate degree?</cell><cell>Yes/No</cell></row><row><cell>6</cell><cell>Qualifications</cell><cell>Did you complete any certifications?</cell><cell>Yes/No</cell></row><row><cell>7</cell><cell>Lifestyle</cell><cell>Do you exercise regularly?</cell><cell>Yes/No</cell></row><row><cell>8</cell><cell>Lifestyle</cell><cell>Have you been active this past week?</cell><cell>Yes/No</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Combined Dataset for Training. The combined datasets expose the model to a wide variety of features, enabling it to detect deception in different scenarios. • Combining these datasets facilitates a detailed error analysis that identifies domain-specific challenges. • The integration ensures the model learns from both structured features (e.g., response consistency) and unstructured text. • The model is trained and tested across diverse domains, including structured interviews, political discourse, and consumer opinions, which improves its adaptability to real-world applications.</figDesc><table><row><cell>ID</cell><cell>Source</cell><cell>Textual Content</cell><cell>Consistency Score (C)</cell><cell>Response Delay (R)</cell><cell>Unexpected Question Score (UQS)</cell><cell>Truthfulness Label</cell></row><row><cell>001</cell><cell>SIT</cell><cell>"Yes"</cell><cell>0.25</cell><cell>1.5</cell><cell>0.35</cell><cell>Truthful</cell></row><row><cell>002</cell><cell>LIAR</cell><cell>"The economy grew 5% last year"</cell><cell>N/A</cell><cell>N/A</cell><cell>N/A</cell><cell>Barely True</cell></row><row><cell>003</cell><cell>Deceptive Opinion Spam</cell><cell>"The hotel was amazing"</cell><cell>N/A</cell><cell>N/A</cell><cell>N/A</cell><cell>Deceptive</cell></row><row><cell>004</cell><cell>SIT</cell><cell>"No"</cell><cell>0.65</cell><cell>3.0</cell><cell>0.70</cell><cell>Deceptive</cell></row><row><cell>005</cell><cell>LIAR</cell><cell>"We reduced taxes by 20% in 2020"</cell><cell>N/A</cell><cell>N/A</cell><cell>N/A</cell><cell>Mostly True</cell></row><row><cell>006</cell><cell>Deceptive Opinion Spam</cell><cell>"Terrible experience, won't stay again"</cell><cell>N/A</cell><cell>N/A</cell><cell>N/A</cell><cell>Truthful</cell></row><row><cell>007</cell><cell>SIT</cell><cell>"Yes"</cell><cell>0.40</cell><cell>2.2</cell><cell>0.55</cell><cell>Truthful</cell></row><row><cell>008</cell><cell>LIAR</cell><cell>"Crime rates are lower now than ever"</cell><cell>N/A</cell><cell>N/A</cell><cell>N/A</cell><cell>False</cell></row><row><cell>009</cell><cell>Deceptive Opinion Spam</cell><cell>"Best place I've ever visited!"</cell><cell>N/A</cell><cell>N/A</cell><cell>N/A</cell><cell>Deceptive</cell></row><row><cell>010</cell><cell>SIT</cell><cell>"No"</cell><cell>0.50</cell><cell>2.8</cell><cell>0.60</cell><cell>Truthful</cell></row></table></figure>
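Since only SIT records carry behavioral fields while LIAR and Deceptive Opinion Spam rows hold N/A, one way to train a single network on all three sources is to encode the missing fields with a sentinel value. A minimal sketch, assuming dictionary-shaped records and a sentinel of -1.0; the field names and encoding are illustrative, not the authors' stated preprocessing:

```python
def behavioral_features(record, sentinel=-1.0):
    # Consistency Score (C), Response Delay (R) and Unexpected Question
    # Score (UQS) exist only for SIT records; text-only rows hold "N/A".
    # Encoding N/A as a fixed sentinel lets one model consume all sources.
    def num(value):
        return sentinel if value == "N/A" else float(value)
    return [num(record[k]) for k in ("C", "R", "UQS")]


sit_row = {"C": "0.25", "R": "1.5", "UQS": "0.35"}   # row 001 of Table 2
liar_row = {"C": "N/A", "R": "N/A", "UQS": "N/A"}    # row 002 of Table 2
print(behavioral_features(sit_row))   # [0.25, 1.5, 0.35]
print(behavioral_features(liar_row))  # [-1.0, -1.0, -1.0]
```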
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Performance Metric of the Proposed Model</figDesc><table><row><cell>Metric</cell><cell>Value</cell></row><row><cell>Accuracy</cell><cell>89.5</cell></row><row><cell>Precision</cell><cell>85.7</cell></row><row><cell>Recall</cell><cell>92.4</cell></row><row><cell>F1-Score</cell><cell>88.9</cell></row></table></figure>
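The four metrics in Table 3 follow from standard confusion-matrix definitions. A quick sketch, using illustrative counts rather than the study's actual confusion matrix:

```python
def classification_metrics(tp, fp, fn, tn):
    # Precision penalizes false alarms, recall penalizes missed deceptions,
    # and F1 is the harmonic mean of the two.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1


# Illustrative counts only (deceptive = positive class).
acc, prec, rec, f1 = classification_metrics(tp=9, fp=1, fn=1, tn=9)
print(acc, prec, rec, f1)
```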
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>Comparison with Recent Works</figDesc><table><row><cell>Method</cell><cell>Dataset</cell><cell>Model</cell><cell cols="4">Accuracy (%) Precision (%) Recall (%) F1-Score (%)</cell></row><row><cell>Mendels et al. (2017)</cell><cell>Proprietary Dataset (Audio + Text)</cell><cell>Hybrid Deep Learning (CNN + LSTM)</cell><cell>86.2</cell><cell>83.5</cell><cell>90.0</cell><cell>86.6</cell></row><row><cell>Krishnamurthy et al. (2018)</cell><cell>Real-Life Videos (Multimodal)</cell><cell>Multimodal Neural Model</cell><cell>88.1</cell><cell>84.2</cell><cell>91.0</cell><cell>87.5</cell></row><row><cell>Wang et al. (2020)</cell><cell>Custom Deception Dataset</cell><cell>Language-Guided Deep Learning</cell><cell>88.5</cell><cell>88.2</cell><cell>87.0</cell><cell>86.8</cell></row><row><cell>Proposed Model</cell><cell>SIT + LIAR + Deceptive Opinion Spam</cell><cell>Bi-LSTM + Dense Layers</cell><cell>89.5</cell><cell>85.7</cell><cell>92.4</cell><cell>88.9</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>•</head><label></label><figDesc>Augmented Training Data: Include more diverse examples of deceptive and truthful responses to reduce bias.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5</head><label>5</label><figDesc>Comparison with Traditional Methods</figDesc><table><row><cell>Method</cell><cell cols="3">Accuracy (%) Avg. Test Duration Instrumentation Requirement</cell></row><row><cell>Polygraph Test</cell><cell>72.0</cell><cell>1.5 -4 hours</cell><cell>Specialized equipment</cell></row><row><cell>PVMANN</cell><cell>78.5</cell><cell>45 minutes</cell><cell>PC</cell></row><row><cell>PVMMWANN</cell><cell>82.3</cell><cell>40 minutes</cell><cell>PC</cell></row><row><cell>Proposed Method</cell><cell>89.5</cell><cell>30 minutes</cell><cell>PC</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head></head><label></label><figDesc>The proposed methodology demonstrates significant advances in deception detection by integrating behavioral features from the Strategic Interview Technique (SIT) with textual data from the LIAR and Deceptive Opinion Spam datasets. By leveraging deep learning architectures such as Bi-LSTM and dense layers, the model achieves robust performance, with an accuracy of 89.5%, precision of 85.7%, recall of 92.4%, and an F1-score of 88.9%. These metrics highlight the model's ability to distinguish effectively between truthful and deceptive responses across diverse datasets. A key strength of the methodology lies in its integration of cognitive and textual features, which provides a comprehensive analysis of deceptive behavior. Behavioral metrics such as consistency score, response delay, and unexpected question score add valuable insights into cognitive patterns that are difficult to capture from textual data alone. The inclusion of textual embeddings ensures that the model generalizes well to unstructured data, making it versatile for a range of applications. Compared to recent works, the proposed model outperforms the hybrid deep learning approach of Mendels et al. (2017), the multimodal neural model of <ref type="bibr" target="#b18">Krishnamurthy et al. (2018)</ref>, and the language-guided deep learning method of <ref type="bibr" target="#b2">Wang et al. (2020)</ref>. These improvements are attributed to the innovative use of SIT-derived behavioral metrics and the effective design of the deep learning architecture. Challenges remain in addressing false positives caused by ambiguous truthful responses and false negatives caused by rehearsed deceptive responses.</figDesc><table /></figure>
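To give a concrete sense of the Bi-LSTM component's size, the parameter count of one bidirectional LSTM layer can be computed directly. The input and hidden dimensions below are arbitrary examples for illustration; the paper does not specify them here:

```python
def bilstm_param_count(input_dim, hidden_units):
    # Each LSTM direction has four gates (input, forget, cell, output),
    # each with an input-weight matrix (hidden x input), a recurrent-weight
    # matrix (hidden x hidden), and a bias vector. A Bi-LSTM runs two
    # independent directions, doubling the total.
    per_direction = 4 * (hidden_units * input_dim
                         + hidden_units * hidden_units
                         + hidden_units)
    return 2 * per_direction


# Hypothetical sizes: 100-dimensional input embeddings, 64 hidden units.
print(bilstm_param_count(input_dim=100, hidden_units=64))  # 84480
```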
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">• The LIAR dataset was accessed from LIAR.• The Deceptive Opinion Spam dataset was accessed from Deceptive Opinion Spam.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We sincerely thank the Civic Volunteers of Siliguri Metropolitan Police for their enthusiastic participation in the interview sessions, which were crucial to this study. We are especially grateful to Mr. Sunil Yadav, IPS, Assistant Commissioner Police (Traffic), for his invaluable support and for facilitating this research. We also extend our gratitude to the academic community of the University of North Bengal-students, scholars, and faculty members-for their cooperation, guidance, and valuable feedback, which greatly enriched the quality of our work.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>During the preparation of this work, the author(s) used OpenAI's ChatGPT to assist with grammar and spelling checks. After using this tool, the author(s) reviewed and edited the content as needed and take full responsibility for the publication's content.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">A Tremor in the Blood: Uses and Abuses of the Lie Detector</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Lykken</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Accuracy of Deception Judgments</title>
		<author>
			<persName><forename type="first">Charles</forename><forename type="middle">F</forename><surname>Bond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Depaulo</surname></persName>
		</author>
		<idno type="DOI">10.1207/s15327957pspr1003_2</idno>
	</analytic>
	<monogr>
		<title level="j">Personality and Social Psychology Review</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="214" to="234" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Language-Guided Deep Learning for Deception Detection</title>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pan</surname></persName>
		</author>
		<idno type="DOI">10.1109/TKDE.2019.2892408</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="667" to="677" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Verbal and Nonverbal Communication of Deception</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zuckerman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Depaulo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rosenthal</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0065-2601(08)60369-7</idno>
	</analytic>
	<monogr>
		<title level="j">Advances in Experimental Social Psychology</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="1" to="59" />
			<date type="published" when="1981">1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Detecting Lies and Deceit: Pitfalls and Opportunities</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vrij</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Pattern Variation Method to Detect Lie Using Artificial Neural Network (PVMANN)</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chakraborty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Mandal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">National Conference on Computational Technologies</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="57" to="60" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Pattern Variation Method with Modified Weights to Detect Lie using Artificial Neural Network (PVMMWANN)</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Mandal</surname></persName>
		</author>
		<ptr target="https://iieta.org/sites/default/files/Journals/MMC/MMC_C/2016.77.1_04.pdf" />
	</analytic>
	<monogr>
		<title level="j">AMSE JOURNALS, Modelling C</title>
		<imprint>
			<biblScope unit="volume">77</biblScope>
			<biblScope unit="page" from="41" to="52" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A cognitive approach to lie detection: A meta-analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vrij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Fisher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Blank</surname></persName>
		</author>
		<idno type="DOI">10.1111/lcrp.12088</idno>
	</analytic>
	<monogr>
		<title level="j">Legal and Criminological Psychology</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="21" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Deception detection based on repeated interrogations</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Granhag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Stromwall</surname></persName>
		</author>
		<idno type="DOI">10.1348/135532501168217</idno>
	</analytic>
	<monogr>
		<title level="j">Legal and Criminological Psychology</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="85" to="101" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Strategic use of evidence during police interviews: When training to detect deception works</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hartwig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Granhag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Strömwall</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10979-006-9053-9</idno>
	</analytic>
	<monogr>
		<title level="j">Law and Human Behavior</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="233" to="247" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Strategic Interviewing to Detect Deception: Cues to Deception across Repeated Interviews</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename></persName>
		</author>
		<author>
			<persName><forename type="first">B.-G</forename><forename type="middle">I</forename></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2016.01702</idno>
	</analytic>
	<monogr>
		<title level="j">Front. Psychol</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<idno type="DOI">10.17226/10420</idno>
		<title level="m">The Polygraph and Lie Detection</title>
				<meeting><address><addrLine>Washington, DC</addrLine></address></meeting>
		<imprint>
			<publisher>The National Academies Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
		<respStmt>
			<orgName>National Research Council</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Evaluating Polygraph Data</title>
		<author>
			<persName><forename type="first">A</forename><surname>Slavkovic</surname></persName>
		</author>
		<idno type="DOI">10.1184/R1/6586598.v1</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Information-gathering vs. accusatory interview style: Its impact on deception detection</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vrij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Fisher</surname></persName>
		</author>
		<idno type="DOI">10.1348/135532505X39099</idno>
	</analytic>
	<monogr>
		<title level="j">Legal and Criminological Psychology</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="15" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Strategic use of evidence during police interviews: When training to detect deception works</title>
		<author>
			<persName><forename type="first">M</forename><surname>Hartwig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Granhag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Strömwall</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10979-006-9053-9</idno>
	</analytic>
	<monogr>
		<title level="j">Law and Human Behavior</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="233" to="247" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">How do interviewers attempt to overcome suspects&apos; denials?</title>
		<author>
			<persName><forename type="first">D</forename><surname>Walsh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bull</surname></persName>
		</author>
		<idno type="DOI">10.1002/cbm.1829</idno>
	</analytic>
	<monogr>
		<title level="j">Criminal Behaviour and Mental Health</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="102" to="116" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Drawings as an innovative and effective lie detection tool</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vrij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Leal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Fisher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Warmelink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mann</surname></persName>
		</author>
		<idno type="DOI">10.1037/apl0000298</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Psychology</title>
		<imprint>
			<biblScope unit="volume">103</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="501" to="513" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">An Analysis Towards Dialogue-Based Deception Detection</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Tsunomori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Neubig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sakti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Toda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nakamura</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-19291-8_17</idno>
	</analytic>
	<monogr>
		<title level="m">Natural Language Dialog Systems and Intelligent Assistants</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="177" to="187" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Multimodal Neural Networks for Deception Detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Krishnamurthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Elhabian</surname></persName>
		</author>
		<idno type="DOI">10.1109/CVPRW.2018.00009</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition Workshops<address><addrLine>CVPRW</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Finding Deceptive Opinion Spam by Any Stretch of the Imagination</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Cardie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Hancock</surname></persName>
		</author>
		<idno type="DOI">10.3115/2002472.2002512</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)</title>
				<meeting>the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011)</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="309" to="319" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Automatic Detection of Deception in Text: A Survey</title>
		<author>
			<persName><forename type="first">V</forename><surname>Perez-Rosas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kleinberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lefevre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mihalcea</surname></persName>
		</author>
		<idno type="DOI">10.1162/coli_a_00332</idno>
	</analytic>
	<monogr>
		<title level="j">Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1" to="25" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Liar, Liar Pants on Fire: A New Benchmark Dataset for Fake News Detection</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">Y</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P17-2067</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)</title>
				<meeting>the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="422" to="426" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The detection of faked identity using unexpected questions and mouse dynamics</title>
		<author>
			<persName><forename type="first">M</forename><surname>Monaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gamberini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sartori</surname></persName>
		</author>
		<idno type="DOI">10.1371/journal.pone.0177851</idno>
	</analytic>
	<monogr>
		<title level="j">PLoS ONE</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Learning to Detect Deception from Evasive Answers and Inconsistencies across Repeated Interviews: A Study with Lay Respondents and Police Officers</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename></persName>
		</author>
		<author>
			<persName><forename type="first">B.-G</forename><forename type="middle">I</forename></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">N</forename></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2017.02207</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Psychology</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2018-01-04">4 January 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Long Short-Term Memory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
		<idno type="DOI">10.1162/neco.1997.9.8.1735</idno>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Framewise Phoneme Classification with Bidirectional LSTM Networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Graves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
		<idno type="DOI">10.1109/IJCNN.2005.1556215</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN)</title>
				<meeting>the IEEE International Joint Conference on Neural Networks (IJCNN)</meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="2047" to="2052" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Accuracy of deception judgments</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">F</forename><surname>Bond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Depaulo</surname></persName>
		</author>
		<idno type="DOI">10.1207/s15327957pspr1003_2</idno>
	</analytic>
	<monogr>
		<title level="j">Personality and Social Psychology Review</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="214" to="234" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">The Cognitive Interview and Lie Detection: a New Magnifying Glass for Sherlock Holmes?</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">H</forename><surname>Ndez-Fernaude</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Alonso-Quecuty</surname></persName>
		</author>
		<idno>AID-ACP423&gt;3.0.CO;2-G</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Cognitive Psychology</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="55" to="68" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1423</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Hutto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gilbert</surname></persName>
		</author>
		<idno type="DOI">10.1609/icwsm.v8i1.14550</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International AAAI Conference on Web and Social Media (ICWSM)</title>
		<meeting>the International AAAI Conference on Web and Social Media (ICWSM)</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="216" to="225" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Data Mining: Concepts and Techniques</title>
		<author>
			<persName><forename type="first">J</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kamber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pei</surname></persName>
		</author>
		<idno type="DOI">10.1016/C2009-0-61819-5</idno>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Morgan Kaufmann</publisher>
		</imprint>
	</monogr>
	<note>3rd ed.</note>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Adam: A Method for Stochastic Optimization</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ba</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1412.6980</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Learning Representations (ICLR)</title>
		<meeting>the International Conference on Learning Representations (ICLR)</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Early Stopping-But When?</title>
		<author>
			<persName><forename type="first">L</forename><surname>Prechelt</surname></persName>
		</author>
		<idno type="DOI">10.1007/3-540-49430-8_3</idno>
	</analytic>
	<monogr>
		<title level="m">Neural Networks: Tricks of the Trade</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="55" to="69" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
