<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Deep Learning-Enhanced Detection of Lie Tendencies through Answer Pattern Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Debanil Chanda</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rakesh Kumar Mandal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science &amp; Technology, University of North Bengal</institution>
          ,
          <addr-line>Raja Rammohanpur, Darjeeling, West Bengal 734013</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>8</fpage>
      <lpage>29</lpage>
      <abstract>
        <p>Detecting deceptive behavior is a critical challenge across various domains, including security, recruitment, and criminal investigations. Traditional methods, such as polygraphs, rely on physiological cues and often lack reliability and scalability. This study introduces a deep learning-based methodology that enhances deception detection through the analysis of answer patterns derived from a Strategic Interview Technique (SIT) and publicly available datasets, including LIAR and Deceptive Opinion Spam. By integrating cognitive behavioral features such as response consistency, delay, and reactions to unexpected questions with textual embeddings generated from Bi-LSTM networks, the model provides a comprehensive framework for detecting lie tendencies. The proposed method demonstrates exceptional performance, achieving an accuracy of 89.5% and an F1-score of 88.9%, outperforming recent studies in the field. Comparative analysis highlights its robustness in distinguishing truthful and deceptive responses across structured and unstructured data. Error analysis reveals areas for refinement, including addressing false positives caused by ambiguous responses and false negatives in rehearsed deception. The model's reliance on cost-effective and non-invasive features makes it scalable and practical for real-world applications. This work lays the foundation for integrating multimodal data, such as audio and video, to further enhance the effectiveness of deception detection systems.</p>
      </abstract>
      <kwd-group>
        <kwd>Lie detection</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Answer Pattern Analysis</kwd>
        <kwd>Strategic Interview Technique (SIT)</kwd>
        <kwd>Behavioral Analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>1.1. Significance</title>
        <p>
          Lie detection is a critical area of research with applications in law enforcement, recruitment, and
psychological assessments. Traditional methods, such as polygraphs, rely on physiological signals but
face criticism for being invasive, susceptible to countermeasures, and prone to manipulation [
          <xref ref-type="bibr" rid="ref1">1, 2</xref>
          ].
Advances in behavioral and linguistic analysis offer a more robust alternative [3]. Answer patterns
during structured interviews, for instance, provide cognitive and behavioral cues that are valuable
for detecting deception [4, 5]. Some lie detection techniques rely on question-answering approaches,
such as the Pattern Variation Method to Detect Lie using Artificial Neural Network (PVMANN) and
the Pattern Variation Method with Modified Weights to Detect Lie using Artificial Neural Network
(PVMMWANN) [6, 7]. Both methods only require a personal computer, with suspects interviewed in a
tension-free environment. In these methods, the same questions are asked daily over several days. It
was believed that longer intervals between interviews might lead to inconsistencies in a liar’s answers,
as repeated interrogation could exploit cognitive strain [8]. However, studies have shown that liars can
be as consistent as truthful individuals, even with extended intervals between interviews. This creates
challenges for traditional repetitive questioning approaches, as liars may rehearse their answers to
appear truthful [9]. To address this, interviews should be conducted strategically, where repeating the
same answer becomes difficult for the suspect. Strategically framed questions make it easier to detect
deception without relying on visible negative signs. Such techniques enable the distinction between
truthful and dishonest individuals by introducing subtle variations in the questioning process, forcing
liars to engage cognitively in ways that reveal inconsistencies [10, 11].
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. Related Work</title>
        <p>
          Lie detection has been an essential focus of study, with traditional methods such as polygraph testing
relying on physiological signals like heart rate, skin conductance, and respiratory patterns [12, 13].
While widely used, these methods have several limitations, including invasiveness, high dependency
on instrumentation, and susceptibility to countermeasures [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. These drawbacks have motivated
researchers to explore alternative approaches that focus on cognitive and behavioral indicators [5].
Several techniques based on question-answering have been developed to detect deception. Notable
among these are the Pattern Variation Method to Detect Lie using Artificial Neural Network (PVMANN)
and the Pattern Variation Method with Modified Weights to Detect Lie using Artificial Neural Network
(PVMMWANN) [6, 7]. Both methods are efficient, requiring only a personal computer, and involve
interviewing individuals in a relaxed, tension-free environment. The same questions are repeated daily
over several days to detect inconsistencies, under the assumption that liars would struggle to maintain
consistency over time. However, research has shown that liars can exhibit consistency levels comparable
to truthful individuals, even with extended intervals between interviews [9]. These findings suggest
that repetitive questioning alone may not be sufficient to detect deception, especially for well-prepared
individuals [14]. To address this limitation, strategically designed questions have been proposed,
making it difficult for liars to maintain fabricated answers while remaining straightforward for truthful
individuals [15]. This approach leverages cognitive load and behavioral variability to improve the
accuracy of deception detection [8, 16]. Some studies have explored unconventional tools for deception
detection, such as analyzing cognitive tasks like drawing to reveal inconsistencies in liar individuals
[17]. Dialog-based systems have also been explored for deception detection, leveraging natural language
processing to identify linguistic cues [18]. Machine learning techniques have also been extensively
explored in this domain, particularly for analyzing textual and behavioral data [19]. Early machine
learning models, such as support vector machines and decision trees, relied heavily on handcrafted
features like n-grams and sentiment analysis to classify responses as truthful or deceptive [20, 21, 22].
Although these methods demonstrated potential, their scalability and performance were limited when
applied to large or unstructured datasets [20, 21]. Progress in deep learning has greatly enhanced the
field by enabling the analysis of complex patterns in multimodal data. Methods like the hybrid
CNN-LSTM architecture proposed by Mendels et al. (2017), the multimodal neural network developed by
Krishnamurthy et al. (2018), and the language-guided deep learning model explored by Wang et al.
(2020) achieved promising results by integrating audio, text, and visual features [19, 22]. While effective,
these approaches often require multimodal datasets and computational resources, making them less
practical for general use. Existing methods often face challenges related to scalability, data requirements,
and generalizability [2]. By leveraging behavioral metrics and deep learning techniques, the proposed
approach addresses these limitations, contributing to the advancement of lie detection research.
        </p>
      </sec>
      <sec id="sec-1-3">
        <title>1.3. Objective</title>
        <p>
          This research aims to develop a robust deep learning-based framework for detecting deception by
integrating textual and behavioral features. The key objectives of this study are:
• Incorporating Behavioral Metrics: Utilize response consistency, delay, and unexpected
question reactions derived from SIT to detect cognitive strain indicative of deception [
          <xref ref-type="bibr" rid="ref2 ref3">23, 24</xref>
          ].
• Leveraging Deep Learning: Design a Bi-LSTM-based architecture to process both behavioral
and textual features, enhancing detection accuracy [
          <xref ref-type="bibr" rid="ref4 ref5">25, 26</xref>
          ].
• Evaluating Model Performance: Compare the proposed methodology with existing approaches
using accuracy, precision, recall, and F1-score as metrics [3, 19, 20].
• Real-World Applicability: Demonstrate the practicality of the methodology for applications
such as recruitment, security assessments, and criminal investigations [
          <xref ref-type="bibr" rid="ref6">27</xref>
          ].
        </p>
        <p>By achieving these objectives, this work bridges the gap between traditional behavioral analysis and
modern deep learning techniques, providing a scalable, efficient, and effective solution for deception
detection.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>This study employs a deep learning-based approach to detect lies by integrating textual and behavioral
features derived from multiple datasets. The methodology includes data collection, feature engineering,
and the design of a hybrid Bi-LSTM model that leverages the complementary strengths of behavioral
and linguistic analysis.</p>
      <sec id="sec-2-1">
        <title>2.1. Data Collection and Preparation</title>
        <p>The model is trained and evaluated on a combined dataset consisting of three sources: the LIAR dataset
[22], the Deceptive Opinion Spam dataset [20], and a custom Strategic Interview Technique (SIT)
dataset.</p>
        <sec id="sec-2-1-1">
          <title>2.1.1. Strategic Interview Technique (SIT) Dataset</title>
          <p>
            SIT involves strategically framed and repeated questions to elicit consistent or deceptive behavioral
patterns, where interviewees are subtly challenged to maintain consistency. Table 1 shows sample
interview questions, demonstrating both the repetitive nature and variations in phrasing used to
encourage cognitive consistency. The variety of topics and rephrasing within each category are aimed at
detecting changes in response consistency [
            <xref ref-type="bibr" rid="ref7">28</xref>
            ]. Collected responses include Yes/No answers, response time and cognitive metrics, and reactions
to unexpected or cognitively challenging questions.
          </p>
        </sec>
        <sec id="sec-2-1-2">
          <title>2.1.2. LIAR and Deceptive Opinion Spam Datasets</title>
          <p>LIAR dataset:
• Includes 12,836 labelled political statements with metadata (e.g., speaker, context, credibility).
• Truthfulness levels: true, mostly true, half true, barely true, false, and pants on fire.</p>
          <p>Deceptive Opinion Spam dataset:
• Provides 1,600 truthful and deceptive hotel reviews, categorized into positive and negative statements.
• Features: textual content, word count, and sentiment polarity.</p>
          <p>The datasets were combined, as shown in Table 2, to train the model effectively. The combination of
these datasets provides several advantages, enhancing the robustness and generalizability of the study.</p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Feature Engineering and Preprocessing</title>
        <p>The success of any deep learning-based deception detection system relies heavily on the quality of
features extracted from the input data. This study integrates three distinct datasets—responses collected
via the Strategic Interview Technique (SIT), the LIAR dataset, and the Deceptive Opinion Spam dataset.
Each dataset undergoes tailored preprocessing and feature engineering to ensure consistency and
compatibility for training a unified deep learning model.</p>
        <sec id="sec-2-2-1">
          <title>2.2.1. Behavioral Features (SIT)</title>
          <p>• Response Consistency (C): The Response Consistency is computed as the variance of the repeated responses, shown in Equation 1:

C = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 \quad (1)

where n is the number of repeated responses, x_i is the response at instance i, and \bar{x} is the mean
response. A higher C may indicate potential deception.
• Response Delay (R): The Response Delay is the mean time taken to respond, shown in Equation 2:

R = \frac{1}{n}\sum_{i=1}^{n} t_i \quad (2)

where t_i is the time taken for each response. Increased R suggests cognitive processing, often
associated with deception.
• Unexpected Question Score (UQS): An Unexpected Question Score (UQS) is calculated by
averaging response variances to unexpected questions. Higher UQS values indicate spontaneous
inconsistencies, a potential indicator of deception.</p>
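The three behavioral metrics above can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code; the function names are ours, and answers are assumed to be numerically coded (e.g., Yes/No as 1/0):

```python
def consistency_score(responses):
    """Equation 1: variance of repeated (numerically coded) answers.

    Higher values mean the interviewee changed answers across
    repetitions, a potential sign of deception.
    """
    n = len(responses)
    mean = sum(responses) / n
    return sum((x - mean) ** 2 for x in responses) / n

def response_delay(times_seconds):
    """Equation 2: mean time taken to answer."""
    return sum(times_seconds) / len(times_seconds)

def unexpected_question_score(answers_per_question):
    """UQS: average of per-question response variances for unexpected questions."""
    variances = [consistency_score(a) for a in answers_per_question]
    return sum(variances) / len(variances)

# A perfectly consistent interviewee scores 0; a wavering one scores higher.
stable = consistency_score([1, 1, 1, 1])    # 0.0
wavering = consistency_score([1, 0, 1, 0])  # 0.25
```
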
        </sec>
        <sec id="sec-2-2-2">
          <title>2.2.2. Textual Features (LIAR and Deceptive Opinion Spam Datasets)</title>
          <p>Both the LIAR and Deceptive Opinion Spam datasets provide textual data labeled as truthful or deceptive.
The following preprocessing steps are applied to ensure the extraction of meaningful semantic and
syntactic features:
• Text Cleaning:
– Removal of punctuation, special characters, and stopwords.</p>
          <p>– Conversion to lowercase to standardize input.
• Tokenization and Lemmatization:
– Tokenization splits sentences into individual words.</p>
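The cleaning and tokenization steps can be illustrated with a minimal Python sketch; the tiny stopword list is a stand-in for a full one (e.g., NLTK's), and lemmatization is omitted for brevity:

```python
import re

# Tiny illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "was", "is", "and", "to"}

def clean_and_tokenize(text):
    """Lowercase, strip punctuation/special characters, drop stopwords."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # remove punctuation and special characters
    tokens = text.split()                     # simple whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]

clean_and_tokenize("The hotel was AMAZING!!! Truly a great stay.")
# → ['hotel', 'amazing', 'truly', 'great', 'stay']
```
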
          <p>
            – Lemmatization reduces words to their base or dictionary forms, ensuring consistency.
• Word Embedding:
– Represent words as dense vectors using pretrained embeddings such as BERT. These
embeddings capture semantic and syntactic relationships between words [
            <xref ref-type="bibr" rid="ref8">29</xref>
            ].
– BERT embeddings are particularly beneficial as they consider the context of words in a
sentence, providing a nuanced representation of deceptive statements.
          </p>
        </sec>
        <sec id="sec-2-2-2a">
          <title>2.2.3. Sentiment and Metadata Features</title>
          <p>
            • Sentiment Analysis: Sentiment polarity scores (positive, negative, or neutral) are extracted
using natural language processing (NLP) libraries like VADER [
            <xref ref-type="bibr" rid="ref9">30</xref>
            ]. These scores are particularly
relevant for the Deceptive Opinion Spam dataset, as deceptive reviews often exhibit exaggerated
sentiment.
• Metadata Encoding: Speaker credibility, political affiliation, and context from the LIAR dataset
are encoded numerically using one-hot encoding or embedding layers, depending on the deep
learning model’s architecture.
          </p>
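One-hot encoding of categorical metadata, as described above, can be sketched as follows; the affiliation vocabulary is purely illustrative, not the LIAR schema:

```python
def one_hot(value, vocabulary):
    """Encode a categorical value as a one-hot vector over a fixed vocabulary."""
    return [1 if value == v else 0 for v in vocabulary]

# Hypothetical speaker-affiliation vocabulary, for illustration only.
affiliations = ["democrat", "republican", "independent"]
one_hot("republican", affiliations)  # → [0, 1, 0]
```

Out-of-vocabulary values map to the all-zero vector, which is one common convention; an embedding layer would instead learn dense vectors for each category.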
        </sec>
        <sec id="sec-2-2-3">
          <title>2.2.4. Data Normalization and Augmentation</title>
          <p>
            To ensure uniformity and enhance model generalizability, the following techniques are used:
• Normalization: Numerical features (e.g., C, R, UQS) are normalized using Min-Max scaling [
            <xref ref-type="bibr" rid="ref10">31</xref>
            ],
as shown in Equation 3:

x' = \frac{x - \min(x)}{\max(x) - \min(x)} \quad (3)

where x represents the original feature value, min(x) and max(x) are the minimum and maximum
values of the feature in the dataset, and x' is the normalized value.</p>
          <p>
• Data Augmentation: Synthetic samples are generated for the SIT dataset by simulating
variations in response delay and consistency, ensuring balance between truthful and deceptive classes.
Textual data augmentation includes synonym replacement and backtranslation techniques,
particularly for small subsets of the Deceptive Opinion Spam dataset.</p>
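The Min-Max scaling of Equation 3 translates directly into Python; this is an illustrative sketch, and the constant-feature guard is our addition:

```python
def min_max_scale(values):
    """Equation 3: rescale a feature column to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:                # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

min_max_scale([2.0, 5.0, 8.0])  # → [0.0, 0.5, 1.0]
```
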
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Deep Learning Model Architecture</title>
        <p>The deep learning framework developed in this study is designed to analyze both textual and behavioral
features, leveraging their complementary nature to detect deceptive tendencies with high accuracy. The
model employs a multi-branch architecture that processes distinct feature types—textual embeddings and
behavioral metrics—through specialized neural network layers, culminating in a unified classification
output.</p>
        <sec id="sec-2-3-1">
          <title>2.3.1. Overview of the Architecture</title>
          <p>• A textual branch that processes semantic information using a Bi-directional Long Short-Term Memory (Bi-LSTM) network.</p>
          <p>• A behavioral branch that analyzes numerical features using fully connected dense layers.
These branches are integrated through a concatenation layer, followed by a classification layer that
predicts the likelihood of truthfulness or deception. The modular nature of this architecture allows
seamless incorporation of additional feature types, such as metadata or audio signals, if required.</p>
        </sec>
        <sec id="sec-2-3-2">
          <title>2.3.2. Input Layers</title>
          <p>The input to the model consists of:
• Textual Data: Preprocessed text embeddings from the LIAR and Deceptive Opinion Spam
datasets. Embeddings are generated using pretrained BERT models, capturing both semantic and
contextual nuances.
• Behavioral Data: Numerical features derived from SIT responses, including Response Consistency
(C), Response Delay (R), and Unexpected Question Score (UQS).</p>
          <p>Each input type is normalized and scaled to ensure compatibility with the subsequent layers.</p>
        </sec>
        <sec id="sec-2-3-3">
          <title>2.3.3. Textual Feature Processing (Bi-LSTM)</title>
          <p>The textual branch employs a Bi-LSTM network to capture sequential dependencies and contextual
relationships in the input text:
• Embedding Layer: Pretrained BERT embeddings are used to represent each word as a dense
vector. These embeddings are fine-tuned during training to align with the deception detection
task.
• Bi-LSTM Layer: The Bi-LSTM network processes the sequence of embeddings, capturing
both forward and backward temporal dependencies. The hidden states of the Bi-LSTM encode
contextual relationships between words, which are crucial for detecting nuanced patterns of
deception.
• Dropout Layer: A dropout rate of 0.3 is applied to prevent overfitting, ensuring robust
generalization to unseen data.</p>
          <p>The output of the Bi-LSTM layer is a fixed-dimensional vector representing the entire input text, which
is passed to the concatenation layer.</p>
        </sec>
        <sec id="sec-2-3-4">
          <title>2.3.4. Behavioral Feature Processing (Dense Layers)</title>
          <p>The behavioral branch processes numerical features through fully connected dense layers:
• Input Layer: Accepts normalized behavioral features (C, R, and UQS) as input.
• Dense Layers: Two fully connected layers, with 128 and 64 neurons respectively, apply non-linear
transformations using ReLU activation. These layers enable the network to learn complex
relationships among behavioral metrics.</p>
          <p>• Dropout Layer: A dropout rate of 0.2 is applied after each dense layer to reduce overfitting.
The final output of the behavioral branch is a feature vector summarizing patterns in the behavioral
data.</p>
        </sec>
        <sec id="sec-2-3-5">
          <title>2.3.5. Integration (Concatenation Layer)</title>
          <p>The outputs of the Bi-LSTM and dense layers are concatenated into a unified feature vector. This layer
enables the model to jointly analyze textual and behavioral patterns, leveraging their complementary
strengths.</p>
        </sec>
        <sec id="sec-2-3-6">
          <title>2.3.6. Classification Layer</title>
          <p>The concatenated feature vector is passed through a series of dense layers for classification:
• Dense Layers: Two fully connected layers with 64 and 32 neurons, using ReLU activation.
• Output Layer: A softmax layer outputs probabilities for two classes: truthful and liar.
The final prediction is based on the class with the highest probability.</p>
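The softmax output and highest-probability decision described above can be sketched in plain Python (function names are ours, for illustration):

```python
import math

def softmax(logits):
    """Convert raw scores to class probabilities (numerically stabilized)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, classes=("truthful", "liar")):
    """Pick the class with the highest softmax probability."""
    probs = softmax(logits)
    return classes[probs.index(max(probs))]

predict([2.0, 0.5])  # → 'truthful'
```
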
        </sec>
        <sec id="sec-2-3-7">
          <title>2.3.7. Loss Function and Optimization</title>
          <p>
            The model is trained to minimize the Categorical Cross-Entropy Loss, shown in Equation 4:

\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log(\hat{y}_{i,c}) \quad (4)

where y_{i,c} is the true label for class c of sample i, \hat{y}_{i,c} is the predicted probability for class c, N is the
number of samples, and C is the number of classes (truthful and liar). The Adam optimizer is used
for efficient and adaptive gradient updates, with an initial learning rate of 10^{-4} [
            <xref ref-type="bibr" rid="ref11">32</xref>
            ]. Early stopping is
applied during training to prevent overfitting [
            <xref ref-type="bibr" rid="ref12">33</xref>
            ].
          </p>
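Equation 4 can be computed directly; this illustrative sketch adds a small epsilon (our addition) to guard against log(0):

```python
import math

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Equation 4: mean negative log-likelihood over N samples and C classes.

    y_true holds one-hot labels; y_pred holds predicted class probabilities.
    """
    n = len(y_true)
    total = 0.0
    for truth, pred in zip(y_true, y_pred):
        for t, p in zip(truth, pred):
            total += t * math.log(max(p, eps))  # eps guards against log(0)
    return -total / n

# A confident correct prediction yields a small loss.
categorical_cross_entropy([[1, 0]], [[0.9, 0.1]])  # ≈ 0.105
```
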
        </sec>
        <sec id="sec-2-3-8">
          <title>2.3.8. Model Training and Validation</title>
          <p>The combined dataset was divided into training (80%), validation (10%), and test (10%) sets.
Cross-validation was employed to ensure generalizability across diverse data domains. The training set
is used to update the model weights, the validation set is monitored during training for early stopping,
and the test set evaluates the model’s generalization performance on unseen data. A batch size of 32 is
used, with training conducted over 50 epochs or until early stopping criteria are met. Performance is
measured using accuracy, precision, recall, and F1-score. The overall workflow is illustrated in Figure 1,
which provides a step-by-step depiction of how the inputs are processed, features are extracted, and
predictions are made.</p>
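The 80/10/10 split can be sketched as follows; the fixed seed is our illustrative choice for reproducibility, not a detail from the paper:

```python
import random

def train_val_test_split(samples, seed=42):
    """Shuffle and split into 80% train / 10% validation / 10% test."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    data = list(samples)
    rng.shuffle(data)
    n = len(data)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train, val, test = train_val_test_split(range(100))
# len(train), len(val), len(test) → 80, 10, 10
```
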
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Result Analysis and Discussion</title>
      <p>The proposed methodology integrates cognitive principles with deep learning models to effectively
classify truthful and deceptive responses. The evaluation is based on behavioral features from SIT and
textual patterns from LIAR and Deceptive Opinion Spam datasets. The model’s performance is analyzed
using key metrics, comparative studies, and visual aids, ensuring a comprehensive understanding of its
capabilities.</p>
      <sec id="sec-3-1">
        <title>3.1. Model Performance</title>
        <p>The performance of the proposed model was evaluated using standard metrics: accuracy, precision,
recall, and F1-score. The results are presented in Figure 2 and Table 3. The proposed model achieves
high accuracy (89.5%) and recall (92.4%), outperforming traditional techniques. Its superior F1-score
(88.9%) highlights its balanced capability in identifying deceptive behavior while minimizing false
positives and negatives.</p>
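The four reported metrics follow directly from confusion-matrix counts; the counts below are purely illustrative, not the paper's results:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts
    (deceptive = positive class)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration only:
acc, prec, rec, f1 = classification_metrics(tp=90, fp=15, fn=10, tn=85)
```
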
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Comparison with Recent Works</title>
        <p>The model’s performance is compared with other state-of-the-art methods, as shown in Table 4 and
visualized in Figure 3. The proposed methodology achieves the highest accuracy and recall among
the compared models. The integration of SIT behavioral features with textual embeddings gives it a
competitive advantage.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. ROC Curve Analysis</title>
        <p>The model’s discriminative capability is illustrated in Figure 4, which depicts the Receiver Operating
Characteristic (ROC) curve. The Area Under the Curve (AUC) value of 0.75 confirms the model’s
ability to reliably differentiate between truthful and deceptive responses.</p>
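AUC can also be computed without plotting the ROC curve, as the probability that a randomly chosen deceptive (positive) example outranks a randomly chosen truthful (negative) one; this sketch (names ours) illustrates the calculation on toy scores:

```python
def auc_score(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs where the
    positive example is scored higher; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # → 0.75
```
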
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Error Analysis</title>
        <p>A detailed error analysis, shown in Figure 5, was conducted to identify limitations:
• False Positives: Truthful responses misclassified as deceptive, often due to ambiguous or overly
concise answers.
• False Negatives: Deceptive responses misclassified as truthful, typically observed in rehearsed
or highly consistent responses.</p>
        <p>To mitigate these errors:
• Enhanced SIT Questions: Increase the variability and complexity of questions to induce greater
cognitive load.
• Augmented Training Data: Include more diverse examples of liar and truthful responses to
reduce bias.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Comparison with Traditional Methods</title>
        <p>As shown in the Table 5, the proposed model achieves significantly higher accuracy than traditional
techniques, such as the Polygraph Test, while requiring less time and no specialized equipment.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Discussion</title>
        <p>The proposed methodology demonstrates significant advancements in deception detection by integrating
behavioral features from the Strategic Interview Technique (SIT) with textual data from the LIAR and
Deceptive Opinion Spam datasets. By leveraging deep learning architectures such as Bi-LSTM and
dense layers, the model achieves robust performance, with an accuracy of 89.5%, precision of 85.7%,
recall of 92.4%, and an F1-score of 88.9%. These metrics highlight the model’s ability to effectively
distinguish between truthful and deceptive responses across diverse datasets. A key strength of the
methodology lies in its integration of cognitive and textual features, which provides a comprehensive
analysis of deceptive behavior. Behavioral metrics such as consistency score, response delay, and
unexpected question score add valuable insights into cognitive patterns that are difficult to capture
using textual data alone. The inclusion of textual embeddings ensures that the model generalizes well
across unstructured data, making it versatile for a range of applications. Compared to recent works,
the proposed model outperforms methods such as the hybrid deep learning approach by Mendels et
al. (2017), the multimodal neural model by Krishnamurthy et al. (2018) and a language guided deep
learning method by Wang et al. (2020). These improvements are attributed to the innovative use of
SIT-derived behavioral metrics and the effective design of the deep learning architecture. Challenges
remain in addressing false positives caused by ambiguous truthful responses and false negatives in
rehearsed deceptive responses. Refining SIT question design to increase variability and cognitive load,
along with diversifying training datasets to include a broader demographic range, could further enhance
model performance. Despite these challenges, the methodology’s reliance on simple inputs and short
test durations makes it cost-effective, scalable, and practical for real-world applications. This work
establishes a strong foundation for future research. The integration of additional modalities, such as
audio or video, could further improve the detection of deception and expand the applicability of this
approach in domains such as security, recruitment, and criminal investigations.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this study, an innovative approach was introduced for detecting an individual’s tendency to lie by
examining response patterns using an Artificial Neural Network (ANN) based on a Strategic Interview
Technique (SIT). Unlike traditional lie detection methods that rely heavily on physiological responses
or simplistic question-answering techniques, the proposed method analyzes subtle variations in answer
consistency and response delay. The findings reveal that the ANN-based SIT model achieves a high
accuracy rate of 89.5%, surpassing traditional methods such as Polygraph and previous ANN-based lie
detection techniques like PVMANN and PVMMWANN. By reducing the dependency on specialized
equipment and minimizing testing time, this model provides a practical and accessible alternative for
lie detection, particularly in settings where traditional methods may not be feasible or affordable. The
analysis of Response Consistency (C) and Response Delay (R) proved valuable in distinguishing between
truthful and deceptive individuals. While truthful individuals typically exhibit stable patterns and
shorter response times, deceptive individuals tend to show higher variability and delay, highlighting
cognitive load differences. However, certain limitations, such as the potential for manipulation by
highly trained individuals and emotional influence, suggest areas for future improvement. Integrating
additional biometrics, enhancing cultural sensitivity in questioning, and exploring adaptive question
models could further enhance accuracy and applicability. In conclusion, the proposed ANN-based SIT
model demonstrates significant progress in the field of lie detection by combining cognitive science
principles with machine learning techniques. The results underscore its potential for practical use
in criminal investigations, security assessments, and personnel evaluations, contributing a reliable,
cost-effective, and less intrusive alternative to traditional methods.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>We sincerely thank the Civic Volunteers of Siliguri Metropolitan Police for their enthusiastic participation
in the interview sessions, which were crucial to this study. We are especially grateful to Mr. Sunil Yadav,
IPS, Assistant Commissioner of Police (Traffic), for his invaluable support and for facilitating this research.
We also extend our gratitude to the academic community of the University of North Bengal—students,
scholars, and faculty members—for their cooperation, guidance, and valuable feedback, which greatly
enriched the quality of our work.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used OpenAI’s ChatGPT to assist with grammar and
spelling checks. After using this tool, the author(s) reviewed and edited the content as needed and take
full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1"><mixed-citation>[1] Lykken, D. T. (1998). A Tremor in the Blood: Uses and Abuses of the Lie Detector. Springer.</mixed-citation></ref>
      <ref id="ref13"><mixed-citation>[2] Bond, C. F., &amp; DePaulo, B. M. (2006). Accuracy of Deception Judgments. Personality and Social Psychology Review, 10(3), 214-234. DOI: 10.1207/s15327957pspr1003_2.</mixed-citation></ref>
      <ref id="ref14"><mixed-citation>[3] Wang, X., Peng, H., &amp; Pan, S. (2020). Language-Guided Deep Learning for Deception Detection. IEEE Transactions on Knowledge and Data Engineering, 32(4), 667-677. DOI: 10.1109/TKDE.2019.2892408.</mixed-citation></ref>
      <ref id="ref15"><mixed-citation>[4] Zuckerman, M., DePaulo, B. M., &amp; Rosenthal, R. (1981). Verbal and Nonverbal Communication of Deception. Advances in Experimental Social Psychology, 14, 1-59. DOI: 10.1016/S0065-2601(08)60369-7.</mixed-citation></ref>
      <ref id="ref16"><mixed-citation>[5] Vrij, A. (2008). Detecting Lies and Deceit: Pitfalls and Opportunities. John Wiley &amp; Sons. ISBN: 978-0470516256.</mixed-citation></ref>
      <ref id="ref17"><mixed-citation>[6] Chakraborty, S., &amp; Mandal, R. K. (2016). Pattern Variation Method to Detect Lie Using Artificial Neural Network (PVMANN). National Conference on Computational Technologies, 57-60.</mixed-citation></ref>
      <ref id="ref18"><mixed-citation>[7] Mandal, R. K. (2016). Pattern Variation Method with Modified Weights to Detect Lie Using Artificial Neural Network (PVMMWANN). AMSE Journals, Modelling C, 77, 41-52. Available at: https://iieta.org/sites/default/files/Journals/MMC/MMC_C/2016.77.1_04.pdf.</mixed-citation></ref>
      <ref id="ref19"><mixed-citation>[8] Vrij, A., Fisher, R. P., &amp; Blank, H. (2017). A cognitive approach to lie detection: A meta-analysis. Legal and Criminological Psychology, 22(1), 1-21. DOI: 10.1111/lcrp.12088.</mixed-citation></ref>
      <ref id="ref20"><mixed-citation>[9] Granhag, P. A., &amp; Strömwall, L. A. (2001). Deception detection based on repeated interrogations. Legal and Criminological Psychology, 6, 85-101. DOI: 10.1348/135532501168217.</mixed-citation></ref>
      <ref id="ref21"><mixed-citation>[10] Hartwig, M., Granhag, P. A., &amp; Strömwall, L. A. (2007). Strategic use of evidence during police interviews: When training to detect deception works. Law and Human Behavior, 31(2), 233-247. DOI: 10.1007/s10979-006-9053-9.</mixed-citation></ref>
      <ref id="ref22"><mixed-citation>[11] Masip, J., Blandón-Gitlin, I., Martínez, C., Herrero, C., &amp; Ibabe, I. (2016). Strategic Interviewing to Detect Deception: Cues to Deception across Repeated Interviews. Frontiers in Psychology, 7, 1-17. DOI: 10.3389/fpsyg.2016.01702.</mixed-citation></ref>
      <ref id="ref23"><mixed-citation>[12] National Research Council. (2003). The Polygraph and Lie Detection. Washington, DC: The National Academies Press. DOI: 10.17226/10420.</mixed-citation></ref>
      <ref id="ref24"><mixed-citation>[13] Slavkovic, A. (2018). Evaluating Polygraph Data. DOI: 10.1184/R1/6586598.v1.</mixed-citation></ref>
      <ref id="ref25"><mixed-citation>[14] Vrij, A., Mann, S., &amp; Fisher, R. P. (2006). Information-gathering vs. accusatory interview style: Its impact on deception detection. Legal and Criminological Psychology, 11(1), 1-15. DOI: 10.1348/135532505X39099.</mixed-citation></ref>
      <ref id="ref26"><mixed-citation>[15] Hartwig, M., Granhag, P. A., &amp; Strömwall, L. A. (2007). Strategic use of evidence during police interviews: When training to detect deception works. Law and Human Behavior, 31(2), 233-247. DOI: 10.1007/s10979-006-9053-9.</mixed-citation></ref>
      <ref id="ref27"><mixed-citation>[16] Walsh, D., &amp; Bull, R. (2012). How do interviewers attempt to overcome suspects’ denials? Criminal Behaviour and Mental Health, 22(2), 102-116. DOI: 10.1002/cbm.1829.</mixed-citation></ref>
      <ref id="ref28"><mixed-citation>[17] Vrij, A., Leal, S., Fisher, R. P., Warmelink, L., &amp; Mann, S. (2018). Drawings as an innovative and effective lie detection tool. Journal of Applied Psychology, 103(5), 501-513. DOI: 10.1037/apl0000298.</mixed-citation></ref>
      <ref id="ref29"><mixed-citation>[18] Tsunomori, Y., Neubig, G., Sakti, S., Toda, T., &amp; Nakamura, S. (2015). An Analysis Towards Dialogue-Based Deception Detection. In Natural Language Dialog Systems and Intelligent Assistants (pp. 177-187). Springer, Cham. DOI: 10.1007/978-3-319-19291-8_17.</mixed-citation></ref>
      <ref id="ref30"><mixed-citation>[19] Krishnamurthy, S., Ramesh, S., &amp; Elhabian, S. (2018). Multimodal Neural Networks for Deception Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2018), 1-7. DOI: 10.1109/CVPRW.2018.00009.</mixed-citation></ref>
      <ref id="ref31"><mixed-citation>[20] Ott, M., Choi, Y., Cardie, C., &amp; Hancock, J. T. (2011). Finding Deceptive Opinion Spam by Any Stretch of the Imagination. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT 2011), 309-319. DOI: 10.3115/2002472.2002512.</mixed-citation></ref>
      <ref id="ref32"><mixed-citation>[21] Perez-Rosas, V., Kleinberg, B., Lefevre, A., &amp; Mihalcea, R. (2018). Automatic Detection of Deception in Text: A Survey. Computational Linguistics, 44(4), 1-25. DOI: 10.1162/coli_a_00332.</mixed-citation></ref>
      <ref id="ref33"><mixed-citation>[22] Wang, W. Y. (2017). "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), 422-426. DOI: 10.18653/v1/P17-2067.</mixed-citation></ref>
      <ref id="ref2"><mixed-citation>[23] Monaro, M., Gamberini, L., &amp; Sartori, G. (2017). The detection of faked identity using unexpected questions and mouse dynamics. PLoS ONE, 12(5), 1-13. DOI: 10.1371/journal.pone.0177851.</mixed-citation></ref>
      <ref id="ref3"><mixed-citation>[24] Masip, J., Martínez, C., Blandón-Gitlin, I., Sánchez, N., Herrero, C., &amp; Ibabe, I. (2018). Learning to Detect Deception from Evasive Answers and Inconsistencies across Repeated Interviews: A Study with Lay Respondents and Police Officers. Frontiers in Psychology, 8, 1-17. DOI: 10.3389/fpsyg.2017.02207.</mixed-citation></ref>
      <ref id="ref4"><mixed-citation>[25] Hochreiter, S., &amp; Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735-1780. DOI: 10.1162/neco.1997.9.8.1735.</mixed-citation></ref>
      <ref id="ref5"><mixed-citation>[26] Graves, A., &amp; Schmidhuber, J. (2005). Framewise Phoneme Classification with Bidirectional LSTM Networks. Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), 2047-2052. DOI: 10.1109/IJCNN.2005.1556215.</mixed-citation></ref>
      <ref id="ref6"><mixed-citation>[27] Bond, C. F., &amp; DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214-234. DOI: 10.1207/s15327957pspr1003_2.</mixed-citation></ref>
      <ref id="ref7"><mixed-citation>[28] Hernández-Fernaud, E., &amp; Alonso-Quecuty, M. (1997). The Cognitive Interview and Lie Detection: a New Magnifying Glass for Sherlock Holmes? Applied Cognitive Psychology, 11, 55-68. DOI: 10.1002/(SICI)1099-0720(199702)11:1&lt;55::AID-ACP423&gt;3.0.CO;2-G.</mixed-citation></ref>
      <ref id="ref8"><mixed-citation>[29] Devlin, J., Chang, M. W., Lee, K., &amp; Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186. DOI: 10.18653/v1/N19-1423.</mixed-citation></ref>
      <ref id="ref9"><mixed-citation>[30] Hutto, C. J., &amp; Gilbert, E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media (ICWSM), 216-225. DOI: 10.1609/icwsm.v8i1.14550.</mixed-citation></ref>
      <ref id="ref10"><mixed-citation>[31] Han, J., Kamber, M., &amp; Pei, J. (2011). Data Mining: Concepts and Techniques (3rd ed.). Morgan Kaufmann. DOI: 10.1016/C2009-0-61819-5.</mixed-citation></ref>
      <ref id="ref11"><mixed-citation>[32] Kingma, D. P., &amp; Ba, J. (2015). Adam: A Method for Stochastic Optimization. Proceedings of the International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1412.6980.</mixed-citation></ref>
      <ref id="ref12"><mixed-citation>[33] Prechelt, L. (1998). Early Stopping - But When? In Neural Networks: Tricks of the Trade (pp. 55-69). Springer. DOI: 10.1007/3-540-49430-8_3.</mixed-citation></ref>
    </ref-list>
  </back>
</article>