<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Intimacy-aware Style Control in Dialog Response Generation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Takuto Miura</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kiyoaki Shirai</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Natthawut Kertkeidkachorn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Japan Advanced Institute of Science and Technology</institution>
          ,
          <addr-line>Nomi, Ishikawa, 923-1211</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <fpage>5</fpage>
      <lpage>16</lpage>
      <abstract>
        <p>One of the crucial features in developing a dialog system is the choice of an appropriate speech style. This paper proposes a novel method for training a dialog model that can effectively control the style of a response. Specifically, the dialog model generates responses in a polite style when the user exhibits a low level of intimacy with the system and in a casual style when the user shows a high level of intimacy. Using a pre-trained language model (PLM) as a base dialog model, two loss functions are proposed for fine-tuning the PLM to generate responses in an appropriate style. One is the intimacy-aware word-level loss, which serves to ensure that the dialog model generates a polite or casual word when the user's level of intimacy is low or high. The other is the intimacy-aware sentence-level loss, which functions to increase the probability of the polite style of the generated utterance when the user's level of intimacy is low, and vice versa. The results of both automatic and human evaluations in the experiments demonstrate that the proposed method is more effective than the baselines in generating responses that align with the user's degree of intimacy. Furthermore, the proposed method exhibits comparable relevance and fluency to the PLM, indicating that the losses for the style control do not diminish the PLM's exceptional capacity for generating relevant and fluent responses.</p>
      </abstract>
      <kwd-group>
        <kwd>Dialog System</kwd>
        <kwd>Speech Style</kwd>
        <kwd>Intimacy</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Dialog systems that freely chat with users on a wide range of topics have attracted a great deal of
attention in recent years [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. These systems are required to have comfortable conversations with
users and build long-term friendly relationships with them [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Humans adjust their speech style
according to their social relationships with their partners and/or the level of intimacy they share with
their partners [
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ]. Such behavior is referred to as a “style control” hereafter. One of the style
controls is to use both polite and casual styles depending on the relationship with the partner [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ].
Polite styles are often used in a conversation with a boss or a teacher, while casual styles are often
employed with a friend or a life partner. The style control should be considered in all conversations,
whether between humans or between humans and dialog systems [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The goal of this research is to develop a dialog system that flexibly controls speech styles according
to the user. Specifically, concerning the user’s intimacy with the dialog system, a response is generated
in a polite style when the user’s level of intimacy is low, and in a casual style when the level of
intimacy is high. To achieve this, we propose a method to incorporate knowledge necessary for style
control by fine-tuning a dialog model based on a pre-trained language model (PLM) that is capable of
generating a variety of responses consistent with the dialog context. A new loss function for fine-tuning
a dialog model is designed so that the model generates polite or casual responses when the level of
intimacy is low or high, where the level of intimacy is estimated from the user’s past utterances.</p>
      <p>The contributions of this paper are summarized as follows:
• We develop a dialog system that estimates the user’s level of intimacy and controls the polite
and casual styles in generating responses accordingly.
• We propose an approach to incorporate knowledge for style control into an existing outstanding
PLM-based dialog model.
• We demonstrate the effectiveness of the proposed method by both automatic and manual
evaluations.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Several methods have been developed for the generation of responses in a specific style. Niu and Bansal
defined the task of generating responses in a predefined style, such as polite or rude [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Gao et al.
proposed a method that shared the latent space between conversational and stylistic modeling and
developed a model that generated responses in a specified style while maintaining consistency with the
dialog context [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Zhu et al. extended Gao’s model so that the representations of content and style
were learned in different dimensions of the latent space [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Zheng et al. proposed a method for automatic
construction of a dialog corpus consisting of utterances in a certain style, aiming to train a stylized
dialog model [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Specifically, they created a Seq2Seq model, which transforms sentences in an original
dialog corpus into ones in the specified style, using texts written in that style. Tsai et al. evaluated three
approaches to achieve both content and style fidelity: conditional learning, guided fine-tuning, and
guided decoding [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In conditional learning, special tokens about a style are added to the input of the
dialog model. In guided fine-tuning, a style of an utterance is classified, and the classification result is
added to the input of the dialog model. In guided decoding, the weights of the output of the decoder
are determined based on the result of the style classification model. Saha et al. proposed a multitask
learning method that predicts the speaker’s personality and intention when training a dialog model
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. This approach is designed to control the style following the predicted state of the speaker.
      </p>
      <p>
        Based on the aforementioned studies on maintaining a style in response generation, more recent
methods have been developed to add the capability of style control to a well-developed existing dialog
model. Sun et al. trained a dialog model using reinforcement learning, in which responses similar to
the ground-truth response and including style-related tokens got a higher reward [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The similarity
between responses was measured by the cosine similarity of the sentence embeddings, while the
style-specific tokens were identified by a pre-trained classification model. Li et al. retrieved a sentence similar
to an utterance from a corpus of sentences written in a specific style and fed the retrieved sentence and
the utterance into a dialog model to generate a stylized response [
        <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Since the retrieved style sentence
might be harmful for generating a response consistent with the context, they incorporated into the
dialog model an encoder that removes features not pertinent to the context, extracting only style features.
This encoder was trained simultaneously with the dialog model. Yang et al. proposed loss
functions using a language model that generated sentences in the specified style and a classification
model that identified the style of a sentence for fine-tuning the PLM of the dialog model [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>Although the above previous studies can generate natural stylized responses, they are limited to
handling a single style. In contrast, our method enables the control of multiple styles according
to the user’s mental state.</p>
      <p>
        Several studies focused on the emotional state of the user during a dialog. Skowron et al. showed
that interactive expression of emotions in response to the user’s feelings can significantly contribute to
enhancing the enjoyment of the chat and the emotional connection between the user and the system
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. D’mello and Graesser developed an intelligent tutoring agent that responds empathetically or
motivationally according to the user’s cognitive and emotional states [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. This interactive agent
dramatically improved the learning efficacy of students with limited domain knowledge. Thus, controlling
the type of response of the system according to the user’s internal state exerts a considerable influence.
In this study, we deal with the user’s intimacy as the user’s internal state and the speech style as the
type of response.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Method</title>
      <p>The proposed method learns a dialog model that can adapt its style to either polite or casual according to the
user’s level of intimacy, which is automatically estimated from the user’s historical utterances. Figure 1
shows an overview of the proposed method. Let us suppose that a dialog model generates a response to
a user for a given dialog context C = {s_1, u_1, · · · , s_n, u_n}, where the system (s_i) and the user (u_i) make
utterances alternately. Figure 1 exemplifies the case of n = 4. The intimacy estimation model employs
the user’s previous utterances U = {u_1, · · · , u_n} as input and determines whether the user’s level
of intimacy with the dialog system is high or low. The dialog model accepts the context C and the
estimated intimacy level as input and generates the response s_{n+1} in a casual style when the user’s
intimacy is high and in a polite style when the user’s intimacy is low.</p>
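The overall pipeline can be sketched as follows. This is a minimal illustration of the flow in Figure 1; the function names and interfaces are hypothetical, not the authors' implementation.

```python
def respond(context, user_utterances, estimate_intimacy, generate):
    """Sketch of the overall pipeline: estimate the user's intimacy level
    from the user's past utterances U, then condition the dialog model's
    response on the matching speech style."""
    level = estimate_intimacy(user_utterances)  # "low" or "high"
    # Low intimacy -> polite response, high intimacy -> casual response.
    style = "polite" if level == "low" else "casual"
    return generate(context, style)
```

With a learned classifier plugged in as `estimate_intimacy` and a fine-tuned PLM as `generate`, this is the inference-time behavior the method aims for.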
      <p>
        To learn the above dialog model, we extend STYLEDGPT [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], a model that consistently generates
responses in a specified style obtained by fine-tuning a PLM that can generate versatile responses. To
avoid impairing the exceptional response generation capability of the PLM, only the loss function in
fine-tuning is modified while the architecture of the PLM remains intact. Indeed, Yang et al. demonstrated
that STYLEDGPT performed well not only in its ability to produce utterances in the specified style but
also in generating relevant and fluent responses [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. First, we provide an overview of STYLEDGPT in
subsection 3.1 and then describe the details of the proposed method in the succeeding subsections.
      </p>
      <sec id="sec-3-1">
        <title>3.1. STYLEDGPT</title>
        <p>
          STYLEDGPT employs DialoGPT [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] as a PLM and learns a model that consistently generates responses
in a specified style by fine-tuning it. DialoGPT is a Seq2Seq model based on GPT-2 [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] and has been
pre-trained with a large amount of dialog data.
        </p>
        <p>Word-Level Loss First, a style language model P_s(Y) is trained in advance. Using a style corpus,
D_style, consisting only of texts in a given style, GPT-2 is trained as an autoencoder, i.e., the same
sentence Y ∈ D_style is given as both input and output for fine-tuning.</p>
        <p>Let P_d(Y|X) be a dialog model that returns a response Y for a given dialog context X. The loss is
computed for each dialog sample (X, Y) in the training data D. Y is a sequence of words denoted
by Y = {y_1, · · · , y_T}. Let p_Y = {p_1, · · · , p_T} be the distributions of the predicted probability of the
next word given by the dialog model P_d(Y|X). Also, let p̂_Y = {p̂_1, · · · , p̂_T} be the distributions of
the probability of predicting the next word given by the style language model P_s(Y) when the output
Y of the dialog model is taken as input of the style language model. The distance between p_Y and p̂_Y
is defined as the word-level loss L_w as in Eq. (1).</p>
        <p>L_w = KL(p_Y || p̂_Y) = Σ_{t=1}^{T} KL(p_t || p̂_t)
(1)
KL is the Kullback-Leibler (KL) divergence of the two probability distributions. This loss causes p_Y
to approach p̂_Y, i.e., the dialog model is trained to produce utterances in the specified style.</p>
        <p>Sentence-Level Loss First, a style discrimination model P_c(s|Y) is trained in advance. It identifies
whether a sentence Y is written in a given style s. This model is trained on a dataset that consists of
D_style, a corpus of sentences written in the specific style, as positive samples, and D_dialog, a general
dialog corpus, as negative samples.</p>
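The word-level loss of Eq. (1) can be sketched in pure Python. In practice p_Y and p̂_Y would be softmax outputs of the dialog model and the style language model over the full vocabulary; the names here are illustrative.

```python
import math

def kl(p, q):
    """KL divergence between two discrete next-word distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def word_level_loss(p_Y, p_hat_Y):
    """Eq. (1): sum of per-position KL divergences between the dialog
    model's next-word distributions p_Y and the style LM's p_hat_Y."""
    return sum(kl(p_t, q_t) for p_t, q_t in zip(p_Y, p_hat_Y))
```

The loss is zero when the dialog model already predicts exactly what the style language model would predict, and grows as the two distributions diverge.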
        <p>The loss is computed for each dialog sample (X, Y) ∈ D. Let Ŷ be a response generated by the
dialog model P_d(Y|X) for the input X, and P_c(s|Ŷ) be the probability that the style of Ŷ is coincident
with the style s. Then, the sentence-level loss L_s is defined as in Eq. (2).</p>
        <p>L_s = − log P_c(s|Ŷ)
(2)
This loss causes the dialog model P_d(Y|X) to produce utterances in the style s.</p>
        <p>Negative Log-likelihood Loss The two losses mentioned above are designed to take into account
the style of a response. Fine-tuning a model with only these losses may result in a lack of consistency
between a context and a generated response. Therefore, the negative log-likelihood loss (Eq. (3)) is also
used, which is a common loss for training a dialog model. P_d(Y|X) is the probability that the dialog
model generates the ground-truth response Y from X, where (X, Y) is a sample in D.</p>
        <p>L_nll = − log P_d(Y|X)
(3)</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Loss for Style Control</title>
        <p>We modify the word-level and sentence-level losses in STYLEDGPT to control the style of the response
according to the user’s level of intimacy.</p>
        <p>First, an intimacy estimation model P_i(l|U) is trained. This model predicts l, the user’s level of
intimacy with a dialog system, given the user’s past n utterances (U) as input. In our model, l is
defined as either low or high. The intimacy estimation model is pre-trained using a dialog corpus
annotated with the speaker’s intimacy.</p>
        <p>To handle both polite and casual styles in response generation, two style corpora are prepared. One
is D_pol, which consists of polite-style sentences, and the other is D_cas, which consists of casual-style
sentences.</p>
        <p>Intimacy-aware Word-Level Loss First, the language models of the polite and casual styles, P_pol(Y)
and P_cas(Y), are pre-trained using the corpora D_pol and D_cas, respectively. Next, the word-level
loss of the polite style, L_w,pol, is computed as in Eq. (1). It evaluates how likely a response is to be
polite. Similarly, the word-level loss of the casual style, L_w,cas, is calculated. Finally, the intimacy-aware
word-level loss, L_iw, is defined as the weighted sum of these two losses (Eq. (4)). P_i(l=low|U) and
P_i(l=high|U) are the weights for L_w,pol and L_w,cas, which are the probabilities that the user’s level of intimacy
is low and high, respectively.
L_iw = P_i(l=low|U) · L_w,pol + P_i(l=high|U) · L_w,cas
(4)
This loss is expected to encourage the generation of more polite tokens when intimacy is low and more
casual tokens when intimacy is high.</p>
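Eq. (4) reduces to a probability-weighted sum of the two word-level losses. A minimal sketch, assuming the two intimacy probabilities sum to one:

```python
def intimacy_word_loss(p_low, loss_pol, loss_cas):
    """Eq. (4): weight the polite and casual word-level losses by the
    estimated probabilities of low and high intimacy.
    Assumes p_high = 1 - p_low."""
    return p_low * loss_pol + (1.0 - p_low) * loss_cas
```

When the intimacy estimate is confident (p_low near 0 or 1), the loss reduces to the single-style word-level loss of STYLEDGPT.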
        <p>Intimacy-aware Sentence-Level Loss First, we train a style discrimination model P_c′(s|Y) that
identifies a style s of a sentence Y, where s is either polite or casual. The style discrimination model is
trained in advance on training data where utterances in D_pol are samples of the polite class and those
in D_cas are samples of the casual class.</p>
        <p>Let Ŷ be the output of the dialog model P_d(Y|X) for a given context X. Then, the style of Ŷ
is identified by the style discrimination model, and P_c′(s=polite|Ŷ) and P_c′(s=casual|Ŷ) are obtained.
Following the sentence-level loss of STYLEDGPT, the intimacy-aware sentence-level loss, L_is, is defined
as the weighted sum of the logarithms of these probabilities, using the two probabilities P_i(l=low|U)
and P_i(l=high|U) as weights (Eq. (5)).</p>
        <p>L_is = −P_i(l=low|U) · log P_c′(s=polite|Ŷ) − P_i(l=high|U) · log P_c′(s=casual|Ŷ)
(5)
This loss is expected to train the dialog model to generate utterances in the polite style when the
intimacy is low and in the casual style when the intimacy is high.</p>
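Eq. (5) can be sketched likewise, again assuming P_i(l=high|U) = 1 − P_i(l=low|U):

```python
import math

def intimacy_sentence_loss(p_low, p_polite, p_casual):
    """Eq. (5): negative log-probabilities of the polite and casual styles
    of the generated response, weighted by the intimacy estimate."""
    p_high = 1.0 - p_low
    return -p_low * math.log(p_polite) - p_high * math.log(p_casual)
```

The loss vanishes only when the style discriminator assigns probability 1 to the style matching the intimacy estimate, so gradients push the generator toward that style.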
        <p>Training Objective Eq. (6) shows the total loss, which is a weighted sum of the two losses concerning
a style (L_iw and L_is) and a general response loss (L_nll).</p>
        <p>L = α_iw · L_iw + α_is · L_is + α_nll · L_nll
(6)
α_iw, α_is, and α_nll are hyperparameters representing the weight of each loss.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Additional Input</title>
        <p>In addition to incorporating the user’s intimacy information into the loss functions, the user’s level
of intimacy is explicitly given in the input to the dialog model. Specifically, the level of intimacy is
identified by P_i(l|U), and then the intimacy label is added to the input as follows:
• When l=low : &lt;l&gt; &lt;s&gt; context &lt;/s&gt;
• When l=high : &lt;h&gt; &lt;s&gt; context &lt;/s&gt;
&lt;l&gt; and &lt;h&gt; are special tokens indicating the low and high intimacy classes, respectively. &lt;s&gt; and
&lt;/s&gt; are special tokens indicating the beginning and end of the dialog context. This additional input
allows the dialog model to generate responses in an appropriate style that matches the identified level
of intimacy.</p>
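The input construction above can be sketched as a small helper. The token strings follow the scheme described in this subsection; the function name is illustrative.

```python
def build_input(context, level):
    """Prepend the intimacy token and wrap the dialog context with the
    begin/end tokens, as in the additional-input scheme of Sec. 3.3."""
    tag = "<l>" if level == "low" else "<h>"
    return tag + " <s> " + context + " </s>"
```

At training time the tag comes from the ground-truth or estimated label; at inference time it comes from the intimacy estimation model.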
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Sampling and Ranking</title>
        <p>
          To enhance the ability of the dialog model to generate appropriately styled utterances, the
sampling-and-rank decoding strategy [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] is employed as in STYLEDGPT. First, the dialog model generates N
candidate responses using top-k sampling. Next, a style score and a content score are calculated for
each candidate response, R_i, to assess the quality of R_i. The candidate responses are then re-ranked by
the weighted sum of these scores, and the response with the highest score is chosen as the final output.
        </p>
        <p>The style score Score_s(R_i) is a weighted sum of the style probabilities of R_i, as in Eq. (7). The
weights are the probabilities of the low and high intimacy predicted from the history of the user’s
utterances U. A greater style score indicates that a response is generated in the polite (or casual) style
when the user’s level of intimacy is low (or high).</p>
        <p>Score_s(R_i) = P_i(l=low|U) · P_c′(s=polite|R_i) + P_i(l=high|U) · P_c′(s=casual|R_i)
(7)</p>
        <p>The content score Score_c(R_i) is defined as the probability that the dialog model P_d(Y|X) outputs
the response candidate R_i when the dialog context X is the input, as shown in Eq. (8). This score evaluates
the relevance of R_i to X.</p>
        <p>Score_c(R_i) = P_d(R_i|X)
(8)</p>
        <p>The final score Score(R_i) is defined as Eq. (9). The hyperparameter γ determines the relative
weighting of the two scores.</p>
        <p>Score(R_i) = (1 − γ) · Score_s(R_i) + γ · Score_c(R_i)
(9)</p>
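The re-ranking step amounts to combining the style and content scores and taking the argmax. A self-contained sketch with illustrative names, assuming the two intimacy probabilities sum to one:

```python
def style_score(p_low, p_polite, p_casual):
    """Eq. (7): style score of one candidate under the intimacy estimate."""
    return p_low * p_polite + (1.0 - p_low) * p_casual

def rank(candidates, p_low, style_probs, content_scores, gamma=0.5):
    """Eqs. (7)-(9): re-rank the sampled candidates by the weighted sum of
    style and content scores and return the index of the best candidate.
    style_probs holds (p_polite, p_casual) pairs from the discriminator;
    content_scores holds the dialog-model probabilities of each candidate."""
    def final(i):
        s = style_score(p_low, *style_probs[i])
        return (1.0 - gamma) * s + gamma * content_scores[i]
    return max(range(len(candidates)), key=final)
```

For example, with a confidently low intimacy estimate and equal content scores, the candidate the discriminator judges most polite wins.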
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. Datasets</title>
        <p>Dialog Corpus with Intimacy Level Our in-house dialog corpus annotated with intimacy labels
was used to evaluate the proposed method. This corpus consists of recorded and transcribed dialogs of
approximately ten minutes, conducted between two speakers. For each dialog, the intimacy labels of
each of the two speakers to his/her dialog partner are annotated on a five-point scale. The statistics
of the corpus are as follows: the number of subjects who participated in the conversations is 19, the
number of conversations is 54, and the total number of utterances is 6,984. Hereafter, we refer to this
corpus as the “Japanese Intimacy Dialog Corpus” or “JID corpus” for short.</p>
        <p>The 54 dialogs in the JID corpus were divided into three subsets: a training set of 33 dialogs, a
validation set of 9, and a test set of 12. As mentioned in Section 3, the dialog model accepts the preceding
dialog context of the user and the system, C = {s_1, u_1, · · · , s_n, u_n}, as input and generates the
subsequent response s_{n+1} as output. Hereafter, the pair of a dialog context and its corresponding
response, denoted by (C, s_{n+1}), will be referred to as an instance of response. One speaker in the
corpus was designated as the system and the other as the user to extract a dialog context and response.
The first n × 2 utterances and the next utterance in a dialog were extracted as (C, s_{n+1}). This procedure
was then repeated, with the utterance shifted one by one, to obtain multiple instances of responses.
Finally, 4,032, 921, and 1,284 instances of responses were obtained as the training, validation, and test
data, respectively.</p>
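The sliding-window extraction described above can be sketched as follows, assuming a dialog is a flat list of alternating utterances; the function name is illustrative.

```python
def extract_instances(dialog, n=4):
    """Slide a one-utterance-step window over an alternating dialog
    [s1, u1, s2, u2, ...]: the first n*2 utterances form the context C
    and the next utterance is the response s_{n+1}."""
    size = n * 2
    return [(dialog[i:i + size], dialog[i + size])
            for i in range(len(dialog) - size)]
```

A dialog of m utterances thus yields m − 2n instances of responses, which matches the shift-by-one procedure described in the text.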
        <p>We also used this corpus to train an intimacy estimation model. Let U = {u_1, · · · , u_n} be the user’s
utterances extracted from the dialog context C in an instance of response, and let l be the intimacy label
for the dialog. The intimacy label was designated as “low” when the corresponding score in the JID
corpus was 1 or 2, or “high” when the value was 3, 4, or 5. The intimacy estimation model, P_i(l|U), is
a binary classification model that takes U as input and estimates the intimacy label l. The model was
trained using samples (U, l) in the training and validation data and its performance was evaluated
using the test data.</p>
        <p>
          Style Corpus Two style corpora are required to train the style language models and the style discrimination
model: D_pol and D_cas. The KeiCO corpus [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] was used as D_pol. This corpus contains utterances
using various types of honorific expressions in Japanese. Besides, D_cas was constructed by extracting
utterances from conversations between speakers who know each other in the BTSJ corpus [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. D_pol
contains 10,007 utterances, while D_cas contains 13,351 utterances.
        </p>
        <p>To train the polite and the casual style language models, P_pol(Y) and P_cas(Y), all utterances in D_pol
and D_cas, respectively, were utilized. To train the style discrimination model P_c′(s|Y), a total of
23,248 utterances were used, comprising 9,957 utterances in D_pol and 13,301 utterances in D_cas. The
remaining 100 utterances (50 utterances each) were used to evaluate the style discrimination model.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Experimental Setting</title>
        <p>
          The following methods, including our proposed methods, were compared in the experiment.
• DialoGPT [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] is the dialog model based on GPT-2 [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ], which has been pre-trained using a
large amount of dialog data.
• S-GPT_pol is a STYLEDGPT that always generates polite-style responses.
• S-GPT_cas is a STYLEDGPT that always generates casual-style responses.
• Rule_est is a method to control the style by heuristics. A response is generated by S-GPT_pol when
the intimacy estimation model identifies the user’s level of intimacy as low, and by S-GPT_cas when
it is high.
• Rule_gold switches between S-GPT_pol and S-GPT_cas based on the ground-truth label of the user’s
intimacy.
• I-S-GPT_est is Intimacy-aware STYLEDGPT, our proposed method.
• I-S-GPT_gold is our proposed method, in which the ground-truth intimacy label is used instead of
an estimate based on the intimacy estimation model.
        </p>
        <p>If the performance of the intimacy estimation model is inadequate, misclassification of the level
of intimacy may prevent the learning of the stylized dialog model. To verify the effectiveness of our
approach to control the style of a response in terms of intimacy, I-S-GPT_gold was also evaluated. It can
be regarded as an ideal system that always correctly estimates the user’s intimacy. In this method, in
Eq. (4) and (5), the probability of the level of intimacy was approximated by the five-point intimacy
score (i) in the JID corpus as P_i(l=low|U) ≃ 1 − i/5 and P_i(l=high|U) ≃ i/5. The additional input
described in subsection 3.3 was also given by the ground-truth intimacy score, that is, &lt;l&gt; is added
when i is 1 or 2, while &lt;h&gt; is added when i is between 3 and 5.</p>
        <p>A method using a Large Language Model (LLM) for style-controlled generation can be considered
as a baseline. However, when a prompt is provided to ChatGPT to guess the user’s level of intimacy
and respond in an appropriate style, the generated responses are almost always polite. Therefore,
prompting-based LLM is not included in this experiment.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Implementation Details</title>
        <p>
          Style Language Model and Discrimination Model The style language models ( ) and ( )
were obtained by fine-tuning GPT-2. The architecture of the style language model consists of an
embedding layer, a transformer module, and a decoding layer of GPT-2. The pre-trained model was
japanese-gpt2-medium1, which had been trained on a large-scale Japanese dialog dataset. The learning
rate was set to 5e−4 , the batch size to 4, and the epoch to 20. The Adam optimizer [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] was used to
fine-tune the model.
        </p>
        <p>
          The style discrimination model  ′(| ) was also obtained by fine-tuning the GPT-2 model. The
architecture of the style discrimination model consists of an embedding layer, a transformer module,
and a classification layer of GPT-2. The same pre-trained model used to train the style language model
was fine-tuned using the Adam optimizer with the same hyperparameters. The style discrimination
model was evaluated using the 100 utterances not used for training. Its accuracy was 64%.
Intimacy Estimation Model Bidirectional Encoder Representations from Transformers (BERT) [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]
was used to train the intimacy estimation model. The BERT base Japanese2, which had been trained
on Japanese Wikipedia and Japanese CC-100, was used as a pre-trained model. This BERT model was
fine-tuned using the JID corpus. As for the hyperparameters, the learning rate was set to 5e−6 , the
batch size to 1, and the epoch to 10. The Adam optimizer was used to fine-tune the model. The accuracy
of the intimacy estimation model on the test set was 69%.
        </p>
        <p>The low accuracy indicates that intimacy estimation is a difficult task. Our error analysis shows that
there are few indicative words that are highly related to the speaker’s intimacy. For example, in the
sentiment analysis task, “pleasant” and “happy” are indicative words for positive emotions, and “sad”
and “unhappy” are ones for negative emotions. However, such indicative words are rare in the intimacy
estimation task. Another possible reason for the poor performance is the lack of training data. One of
the possible directions is to apply semi-supervised learning to compensate for small amounts of labeled
data with large amounts of unlabeled data.</p>
        <p>Dialog Model The dialog model described in subsection 4.2 was obtained by fine-tuning GPT-2. The
same pre-trained model1 used for training the style language models was utilized for fine-tuning the
dialog model. As for the hyperparameters, the learning rate was set to 1e−18 , the batch size to 1, and
the epoch to 10. The Adam optimizer was used to fine-tune the model.</p>
        <p>
          The parameters α_iw, α_is, and α_nll in Eq. (6) were set to 0.45, 0.45, and 0.1, respectively. These values
were optimized on the validation data according to the StyCor criterion, which will be described in §4.4.
As for the sampling-and-rank decoding strategy, the hyperparameters were set to the same values as
those used in STYLEDGPT [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], specifically k to 40, N to 50, and γ in Eq. (9) to 0.5.
        </p>
        <p>The length of a dialog context C = {s_1, u_1, · · · , s_n, u_n} was set to 8, i.e., the parameter n was set
to 4. In the preliminary experiment to evaluate the intimacy estimation model, the accuracy of the
model was measured for different values of n. The highest accuracy was obtained when n = 4.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Evaluation Criteria</title>
        <p>Both automatic and human evaluations were carried out to assess the responses generated by the various
methods.</p>
        <p>
          Automatic Evaluation In automatic evaluation, the quality of the generated responses was evaluated
from three perspectives: relevance, diversity, and style. The relevance was measured by BLEU [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] and
ROUGE [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. Specifically, the similarity between a generated response and a ground-truth response
was evaluated using BLEU-1, BLEU-2, ROUGE-1, ROUGE-2, and ROUGE-L. The diversity was measured
by Distinct-1 (Dist-1) and Distinct-2 (Dist-2), following the experiment by Li et al. [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The style was
evaluated using “Style Correlation” (StyCor). The StyCor metric is defined as the correlation between
the probability of the casual style P_c′(s=casual|Y) and the ground-truth level of intimacy (the five-point score is normalized to values between 0 and 1). This
correlation is high when both the predicted probability of the casual style and the intimacy level are
high, or both are low (i.e., the probability of the polite style is high and the intimacy is low). It evaluates
the extent to which the dialog model can control the style so that it generates a response in the casual
(or polite) style when the user’s level of intimacy is high (or low).
        </p>
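Under the assumption that the five-point score i is normalized as (i − 1)/4, StyCor can be sketched as a Pearson correlation (an illustrative implementation, not the authors' code):

```python
def stycor(p_casual, intimacy_scores):
    """Sketch of the StyCor metric: Pearson correlation between the
    predicted casual-style probabilities and the ground-truth five-point
    intimacy scores normalized to [0, 1]."""
    norm = [(i - 1) / 4.0 for i in intimacy_scores]  # five-point scale -> [0, 1]
    n = len(p_casual)
    mx = sum(p_casual) / n
    my = sum(norm) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(p_casual, norm))
    sx = sum((x - mx) ** 2 for x in p_casual) ** 0.5
    sy = sum((y - my) ** 2 for y in norm) ** 0.5
    return cov / (sx * sy)
```

StyCor is 1 when the model is casual exactly in proportion to the user's intimacy, and negative when the styles are systematically inverted.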
        <p>Human Evaluation The quality of the generated responses was evaluated by human subjects. A
hundred instances of responses were randomly chosen from the test set in the JID corpus. For each
instance, responses were generated using the methods described in subsection 4.2 against the dialog
context . The responses were then evaluated by the subjects according to the following three criteria:
• Style Control: Does the response align with the appropriate style for the relationship between the
two speakers? Annotators are also instructed to read the dialog context and guess the relationship
between the speakers.
• Relevance: Is the content of the response relevant and consistent with the context?
• Fluency: Is the response natural, fluent, and free of grammatical errors?</p>
        <p>
          The quality of responses was evaluated by assigning a score of 3 (appropriate), 2 (neutral), or 1
(inappropriate) for each of the three perspectives. Ten native Japanese speakers participated in the
human evaluation. The inter-annotator agreement was measured using Fleiss’s kappa [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ].
        </p>
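The inter-annotator agreement statistic mentioned above, Fleiss's kappa, can be computed from a table of per-item rating counts. The sketch below is a generic implementation of the standard formula, not the authors' code; the input layout (one row per evaluated response, one column per score category 1/2/3) is an assumption.

```python
# Generic Fleiss' kappa for a fixed number of raters per item.
def fleiss_kappa(counts):
    """counts[i][j] = number of annotators who gave item i category j.
    Every item must be rated by the same number of annotators."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item observed agreement P_i.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    # Overall proportion of ratings falling in each category, p_j.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_bar = sum(p_i) / n_items      # mean observed agreement
    p_e = sum(p * p for p in p_j)   # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)
```

For example, a table where all ten annotators give every item the same score yields a kappa of 1, while ratings spread evenly across categories drive it toward (or below) zero.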
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <sec id="sec-5-1">
        <title>5.1. Results of Automatic Evaluation</title>
        <p>Table 1 shows the results of the automatic evaluation. Bold indicates the best system for each
criterion. The StyCor of our proposed method using ground-truth intimacy labels, I-S-GPT, was
0.366, significantly outperforming the baseline methods. In particular, the StyCor of I-S-GPT
was much better than that of the rule-based method, Rule, which naively altered the polite and casual
styles according to the level of intimacy. The StyCor scores dropped considerably when the intimacy
was predicted automatically by the intimacy estimation model, as supported by the large difference
between the I-S-GPT variant using estimated intimacy and I-S-GPT. Our proposed method is thus
highly dependent on the performance of the intimacy estimation model. (The five-scale intimacy score
is normalized to values between 0 and 1 when computing StyCor.)</p>
        <p>As for the relevance, S-GPT achieved the best BLEU, while DialoGPT achieved the best ROUGE.
Our methods, the I-S-GPT variants with ground-truth and with estimated intimacy, were slightly worse
for BLEU and clearly worse for ROUGE than the best system, but comparable to the other baselines. As
for the diversity, no significant difference in Dist-1 and Dist-2 was observed between the methods. These
results indicate that the outstanding ability of the pre-trained dialog model (DialoGPT) to produce
relevant and diverse responses was not substantially damaged by incorporating the techniques of style
control. In addition, no significant difference in relevance and diversity was found between the two
I-S-GPT variants.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Results of Human Evaluation</title>
        <p>The automatic evaluation revealed that the StyCor scores of the methods that automatically estimated
the level of intimacy (the variants of I-S-GPT and Rule using estimated intimacy) were insufficiently
high. These two methods were excluded from the human evaluation to reduce the burden on the
annotators. Table 2 shows the average scores assigned by the ten annotators. The “κ” column represents
Fleiss’s κ, which indicates the agreement of scores between annotators. We also used Welch’s test to
verify whether there was a significant difference in scores between I-S-GPT and the other methods.
The “p” column shows the p-value associated with this statistical test.</p>
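The Welch's test used here is the standard unequal-variance t-test. The sketch below implements the textbook t statistic and Welch-Satterthwaite degrees of freedom with made-up, illustrative score arrays; in practice the p-value would then be read from the t distribution with those degrees of freedom (e.g., via `scipy.stats.t.sf`).

```python
# Welch's t statistic and degrees of freedom for two independent samples
# with possibly unequal variances (Welch-Satterthwaite approximation).
from math import sqrt
from statistics import mean, variance


def welch_t(a, b):
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (
        va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1)
    )
    return t, df


# Illustrative (fabricated) 3/2/1 annotation scores for two systems.
scores_proposed = [3, 3, 2, 3, 2, 3, 3, 2, 3, 3]
scores_baseline = [2, 1, 2, 2, 3, 1, 2, 2, 1, 2]
t_stat, dof = welch_t(scores_proposed, scores_baseline)
print(f"t = {t_stat:.3f}, df = {dof:.1f}")
```

Welch's test is the appropriate choice here because the per-system score distributions need not share a variance, unlike the assumption behind Student's t-test.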
        <p>Style Control The proposed method, I-S-GPT, achieved the highest score for style control. The
p-values indicated that I-S-GPT was significantly better than the other methods, except for Rule.
These results demonstrated that our proposed method was capable of generating responses in a more
appropriate style. Rule was the second-best method; both I-S-GPT and Rule were designed to control
the style according to the level of intimacy. This confirms the validity of our approach of considering
the user’s level of intimacy to use the polite and casual styles appropriately. However, the κ for style
control was 0.13, indicating that the inter-annotator agreement was relatively low.
Relevance Although I-S-GPT was worse than the other methods in the automatic evaluation of
relevance (as shown in Table 2), it achieved the highest score for relevance in the human evaluation.
Nevertheless, no significant difference was observed. At the very least, the ability of the proposed method to
generate responses relevant to the dialog context was comparable to that of the other baselines.
Fluency As with the relevance score, the average score for fluency was the highest for the proposed
method. However, a significant difference was found only between DialoGPT and I-S-GPT. The κ
for fluency was higher than that for style control and relevance, indicating that the annotators exhibited
greater consistency in evaluating the fluency of the responses.
Computational Time Table 3 compares the average time required for response generation
per utterance across all test samples. A server with an NVIDIA RTX A6000 (48 GB) was used for the time
measurements. DialoGPT exhibited the shortest generation time, followed by S-GPT, I-S-GPT, and Rule.
S-GPT takes more time than DialoGPT because of its additional sampling-and-ranking strategy. In addition,
I-S-GPT and Rule are slower than S-GPT because they require additional processing for the intimacy
estimation.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>This paper proposed a novel method of controlling the speech style of a dialog system according to
the user’s level of intimacy with the system. Based on a PLM, which served as a strong base dialog model,
two loss functions were proposed to fine-tune it to generate responses in an appropriate style. In addition,
a special token indicating the user’s level of intimacy was added to the input of the dialog model. The
results of the automatic and human evaluations demonstrated that our proposed method outperformed
the baselines for style control, indicating that the method can generate responses in a polite style when
intimacy is low and in a casual style when intimacy is high.</p>
      <p>In the experiments, the accuracy of the intimacy estimation model was low, which considerably
degraded the performance of the dialog model that relied on it. In future work, we will improve the
intimacy estimation model to enhance the style control ability of the dialog system under conditions
where ground-truth intimacy labels are not available.</p>
      <p>It is our position that the study will not give rise to any significant ethical concerns. Our approach only
controls speech styles according to the internal state of a user, and it does not introduce or exacerbate
any ethical or social bias in a dialog system.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Khatri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Venkatesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hedayatnia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gabriel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <article-title>Alexa prize - state of the art in conversational AI</article-title>
          ,
          <source>AI</source>
          Magazine
          <volume>39</volume>
          (
          <year>2018</year>
          )
          <fpage>40</fpage>
          -
          <lpage>55</lpage>
          . URL: https://ojs.aaai.org/aimagazine/index. php/aimagazine/article/view/2810. doi:
          <volume>10</volume>
          .1609/aimag.v39i3.
          <fpage>2810</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Higashinaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Funakoshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Inaba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tsunomori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Takahashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Akama</surname>
          </string-name>
          ,
          <source>Dialogue System Live Competition: Identifying Problems with Dialogue Systems Through Live Event</source>
          , Springer Singapore,
          <year>2021</year>
          , pp.
          <fpage>185</fpage>
          -
          <lpage>199</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Dinan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Logacheva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malykh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shuster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Urbanek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kiela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Szlam</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Serban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lowe</surname>
          </string-name>
          , et al.,
          <article-title>The second conversational intelligence challenge (convai2)</article-title>
          ,
          <source>in: The NeurIPS'18 Competition: From Machine Learning to Intelligent Conversations</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>187</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Khatri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Venkatesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gabriel</surname>
          </string-name>
          , Q. Liu,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nunn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hedayatnia</surname>
          </string-name>
          , M. Cheng, A.
          <string-name>
            <surname>Nagar</surname>
          </string-name>
          , et al.,
          <article-title>Conversational AI: The science behind the Alexa prize</article-title>
          , arXiv preprint arXiv:1801.03604 (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Wardhaugh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Fuller</surname>
          </string-name>
          ,
          <article-title>An introduction to sociolinguistics</article-title>
          , John Wiley &amp; Sons,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Hovy</surname>
          </string-name>
          ,
          <article-title>Generating natural language under pragmatic constraints</article-title>
          ,
          <source>Journal of Pragmatics</source>
          <volume>11</volume>
          (
          <year>1987</year>
          )
          <fpage>689</fpage>
          -
          <lpage>719</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/0378216687901093. doi:https: //doi.org/10.1016/
          <fpage>0378</fpage>
          -
          <lpage>2166</lpage>
          (
          <issue>87</issue>
          )
          <fpage>90109</fpage>
          -
          <lpage>3</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Silverstein</surname>
          </string-name>
          ,
          <article-title>Indexical order and the dialectics of social life</article-title>
          ,
          <source>Language &amp; Communication</source>
          <volume>23</volume>
          (
          <year>2003</year>
          )
          <fpage>193</fpage>
          -
          <lpage>229</lpage>
          . doi:
          <volume>10</volume>
          .1016/S0271-
          <volume>5309</volume>
          (
          <issue>03</issue>
          )
          <fpage>00013</fpage>
          -
          <lpage>2</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Aapakallio</surname>
          </string-name>
          ,
          <article-title>Understanding Through Politeness - Translations of Japanese Honorific Speech to Finnish and English</article-title>
          , University of Eastern Finland,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Kobayashi</surname>
          </string-name>
          ,
          <article-title>Construction and validation of a Japanese honorific corpus based on systemic functional linguistics</article-title>
          , in: J.
          <string-name>
            <surname>Sälevä</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          Lignos (Eds.),
          <source>Proceedings of the Workshop on Dataset Creation for Lower-Resourced Languages within the 13th Language Resources and Evaluation Conference</source>
          , European Language Resources Association, Marseille, France,
          <year>2022</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>26</lpage>
          . URL: https://aclanthology.org/
          <year>2022</year>
          .dclrl-
          <volume>1</volume>
          .3.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kageyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chiba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ito</surname>
          </string-name>
          ,
          <article-title>Improving user impression in spoken dialog system with gradual speech form control</article-title>
          ,
          <source>in: Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue</source>
          , Association for Computational Linguistics, Melbourne, Australia,
          <year>2018</year>
          , pp.
          <fpage>235</fpage>
          -
          <lpage>240</lpage>
          . URL: https://aclanthology.org/W18-5026. doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>W18</fpage>
          -5026.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <article-title>Polite dialogue generation without parallel data</article-title>
          ,
          <source>Transactions of the Association for Computational Linguistics</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>373</fpage>
          -
          <lpage>389</lpage>
          . URL: https://aclanthology.org/Q18-1027.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>X.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Galley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Brockett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dolan</surname>
          </string-name>
          ,
          <article-title>Structuring latent spaces for stylized response generation</article-title>
          , in: K. Inui,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <string-name>
            <surname>X.</surname>
          </string-name>
          Wan (Eds.),
          <source>Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</source>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Hong Kong, China,
          <year>2019</year>
          , pp.
          <fpage>1814</fpage>
          -
          <lpage>1823</lpage>
          . URL: https://aclanthology.org/D19-1190. doi:
          <volume>10</volume>
          .18653/v1/
          <fpage>D19</fpage>
          -1190.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , T. Liu,
          <string-name>
            <given-names>W. Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Neural stylistic response generation with disentangled latent variables</article-title>
          ,
          <source>in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing</source>
          (Volume
          <volume>1</volume>
          : Long Papers),
          <source>Association for Computational Linguistics</source>
          , Bangkok, Thailand,
          <year>2021</year>
          , pp.
          <fpage>4391</fpage>
          -
          <lpage>4401</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Stylized dialogue response generation using stylized unpaired texts</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , AAAI Press, Online,
          <year>2021</year>
          , pp.
          <fpage>14558</fpage>
          -
          <lpage>14567</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tsai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Oraby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Perera</surname>
          </string-name>
          , J.-Y. Kao,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Narayan-Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <surname>D.</surname>
          </string-name>
          Hakkani-Tur,
          <article-title>Style control for schema-guided natural language generation</article-title>
          , in: A.
          <string-name>
            <surname>Papangelis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Budzianowski</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Nouri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Rastogi</surname>
            ,
            <given-names>Y.-N.</given-names>
          </string-name>
          <string-name>
            <surname>Chen</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI</source>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Online,
          <year>2021</year>
          , pp.
          <fpage>228</fpage>
          -
          <lpage>242</lpage>
          . URL: https://aclanthology.org/
          <year>2021</year>
          .nlp4convai-
          <fpage>1</fpage>
          .21. doi:
          <volume>10</volume>
          .18653/v1/
          <year>2021</year>
          . nlp4convai-
          <fpage>1</fpage>
          .
          <fpage>21</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Srihari</surname>
          </string-name>
          ,
          <article-title>Stylistic response generation by controlling personality traits and intent</article-title>
          , in: B.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Papangelis</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Ultes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Rastogi</surname>
            ,
            <given-names>Y.-N.</given-names>
          </string-name>
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Spithourakis</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Nouri</surname>
          </string-name>
          , W. Shi (Eds.),
          <source>Proceedings of the 4th Workshop on NLP for Conversational AI</source>
          , Association for Computational Linguistics, Dublin, Ireland,
          <year>2022</year>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>211</lpage>
          . URL: https://aclanthology.org/
          <year>2022</year>
          .nlp4convai-
          <fpage>1</fpage>
          .16. doi:
          <volume>10</volume>
          .18653/v1/
          <year>2022</year>
          .nlp4convai-
          <fpage>1</fpage>
          .
          <fpage>16</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Miao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Geng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <article-title>Stylized knowledgegrounded dialogue generation via disentangled template rewriting, in: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics</article-title>
          , Seattle, Washington, USA,
          <year>2022</year>
          , pp.
          <fpage>3304</fpage>
          -
          <lpage>3318</lpage>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <year>2022</year>
          .naacl-main.
          <volume>241</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <article-title>Stylized dialogue generation with feature-guided knowledge augmentation, in: Findings of the Association for Computational Linguistics: EMNLP 2023, Association for Computational Linguistics</article-title>
          , Sentosa Gateway, Singapore,
          <year>2023</year>
          , pp.
          <fpage>7144</fpage>
          -
          <lpage>7157</lpage>
          . doi:
          <volume>10</volume>
          .18653/v1/
          <year>2023</year>
          .findings-emnlp.
          <volume>475</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name><given-names>Z.</given-names> <surname>Yang</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Wu</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Xu</surname></string-name>,
          <string-name><given-names>X.</given-names> <surname>Liang</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Bai</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Li</surname></string-name>,
          <article-title>StyleDGPT: Stylized response generation with pre-trained language models</article-title>,
          in: <string-name><given-names>T.</given-names> <surname>Cohn</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>He</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Liu</surname></string-name> (Eds.),
          <source>Findings of the Association for Computational Linguistics: EMNLP 2020</source>,
          Association for Computational Linguistics, Online,
          <year>2020</year>, pp.
          <fpage>1548</fpage>-<lpage>1559</lpage>.
          URL: https://aclanthology.org/2020.findings-emnlp.140.
          doi:10.18653/v1/2020.findings-emnlp.140.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name><given-names>M.</given-names> <surname>Skowron</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Rank</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Theunis</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Sienkiewicz</surname></string-name>,
          <article-title>The good, the bad and the neutral: affective profile in dialog system-user communication</article-title>,
          in: <source>Proceedings of the 4th International Conference on Affective Computing and Intelligent Interaction - Volume Part I</source>,
          ACII'11, Springer-Verlag, Berlin, Heidelberg,
          <year>2011</year>, pp.
          <fpage>337</fpage>-<lpage>346</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name><given-names>S.</given-names> <surname>D'Mello</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Graesser</surname></string-name>,
          <article-title>AutoTutor and affective AutoTutor: Learning by talking with cognitively and emotionally intelligent computers that talk back</article-title>,
          <source>ACM Trans. Interact. Intell. Syst.</source>
          <volume>2</volume>
          (<year>2013</year>).
          URL: https://doi.org/10.1145/2395123.2395128.
          doi:10.1145/2395123.2395128.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name><given-names>Y.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Sun</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Galley</surname></string-name>,
          <string-name><given-names>Y.-C.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Brockett</surname></string-name>,
          <string-name><given-names>X.</given-names> <surname>Gao</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Gao</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Liu</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Dolan</surname></string-name>,
          <article-title>DIALOGPT: Large-scale generative pre-training for conversational response generation</article-title>,
          in: <string-name><given-names>A.</given-names> <surname>Celikyilmaz</surname></string-name>,
          <string-name><given-names>T.-H.</given-names> <surname>Wen</surname></string-name> (Eds.),
          <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations</source>,
          Association for Computational Linguistics, Online,
          <year>2020</year>, pp.
          <fpage>270</fpage>-<lpage>278</lpage>.
          URL: https://aclanthology.org/2020.acl-demos.30.
          doi:10.18653/v1/2020.acl-demos.30.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name><given-names>A.</given-names> <surname>Radford</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Wu</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Child</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Luan</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Amodei</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Sutskever</surname></string-name>, et al.,
          <article-title>Language models are unsupervised multitask learners</article-title>,
          <source>OpenAI blog</source>
          <volume>1</volume>
          (<year>2019</year>)
          <fpage>9</fpage>.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name><given-names>M.</given-names> <surname>Usami</surname></string-name> (Ed.),
          <source>BTSJ-Japanese Natural Conversation Corpus with Transcripts and Recordings (March 2021)</source>,
          National Institute for Japanese Language and Linguistics, Japan,
          <year>2021</year>.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name><given-names>D. P.</given-names> <surname>Kingma</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Ba</surname></string-name>,
          <article-title>Adam: A method for stochastic optimization</article-title>,
          <source>CoRR</source> abs/1412.6980
          (<year>2014</year>).
          URL: https://api.semanticscholar.org/CorpusID:6628106.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name><given-names>J.</given-names> <surname>Devlin</surname></string-name>,
          <string-name><given-names>M.-W.</given-names> <surname>Chang</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Lee</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Toutanova</surname></string-name>,
          <article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>,
          in: <source>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)</source>,
          Association for Computational Linguistics, Minneapolis, Minnesota,
          <year>2019</year>, pp.
          <fpage>4171</fpage>-<lpage>4186</lpage>.
          URL: https://aclanthology.org/N19-1423.
          doi:10.18653/v1/N19-1423.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name><given-names>K.</given-names> <surname>Papineni</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Roukos</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Ward</surname></string-name>,
          <string-name><given-names>W.-J.</given-names> <surname>Zhu</surname></string-name>,
          <article-title>BLEU: a method for automatic evaluation of machine translation</article-title>,
          in: <source>Proceedings of the 40th Annual Meeting on Association for Computational Linguistics</source>,
          ACL '02, Association for Computational Linguistics, USA,
          <year>2002</year>, pp.
          <fpage>311</fpage>-<lpage>318</lpage>.
          URL: https://doi.org/10.3115/1073083.1073135.
          doi:10.3115/1073083.1073135.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name><given-names>C.-Y.</given-names> <surname>Lin</surname></string-name>,
          <article-title>ROUGE: A package for automatic evaluation of summaries</article-title>,
          in: <source>Text Summarization Branches Out</source>,
          Association for Computational Linguistics, Barcelona, Spain,
          <year>2004</year>, pp.
          <fpage>74</fpage>-<lpage>81</lpage>.
          URL: https://aclanthology.org/W04-1013.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name><given-names>J. L.</given-names> <surname>Fleiss</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Cohen</surname></string-name>,
          <article-title>The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability</article-title>,
          <source>Educational and Psychological Measurement</source>
          <volume>33</volume>
          (<year>1973</year>)
          <fpage>613</fpage>-<lpage>619</lpage>.
          URL: https://cir.nii.ac.jp/crid/1360855569674739072.
          doi:10.1177/001316447303300309.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>