<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Do We Trust What They Say, or What They Do? A Multimodal User Embedding Provides Personalized Explanations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zhicheng Ren</string-name>
          <email>franklinnwren@g.ucla.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhiping Xiao</string-name>
          <email>patxiao@uw.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yizhou Sun</string-name>
          <email>yzsun@cs.ucla.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Aurora Innovation, 280 N Bernardo Ave, Mountain View</institution>
          ,
          <addr-line>CA 94043</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>CIKM MMSR'24: 1st Workshop on Multimodal Search and Recommendations at 33rd ACM International Conference on Information and Knowledge Management</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of California, Los Angeles</institution>
          ,
          <addr-line>Los Angeles, CA 90095</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Washington</institution>
          ,
          <addr-line>1410 NE Campus Pkwy, Seattle, WA 98195</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>With the rapid development of social media, the analysis of social network user data has become increasingly important. User representation learning in social media is a critical area of research, based on which we can conduct personalized content delivery or detect malicious actors. Being more complicated than many other types of data, social network user data has an inherent multimodal nature. Various multimodal approaches have been proposed to harness both the text (i.e. post content) and the relation (i.e. inter-user interaction) information to learn user embeddings of higher quality. The advent of Graph Neural Network models enables more end-to-end integration of user text embeddings and user interaction graphs in social networks. However, most of those approaches do not adequately elucidate which aspects of the data - text or graph structure information - are more helpful for predicting each specific user under a particular task, putting some burden on personalized downstream analysis and untrustworthy information filtering. We propose a simple yet effective framework called Contribution-Aware Multimodal User Embedding (CAMUE) for social networks. We demonstrate with empirical evidence that our approach can provide personalized explainable predictions, automatically mitigating the impact of unreliable information. We also conduct case studies to show how reasonable our results are. We observe that for most users, graph structure information is more trustworthy than text information, but there are some reasonable cases where text helps more. Our work paves the way for more explainable, reliable, and effective social media user embedding, which allows for better personalized content delivery.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-modal representation learning</kwd>
        <kwd>Social network analysis</kwd>
        <kwd>User embeddings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The advancement of social networks has placed the analysis and study of social network data at the
forefront of priorities. User-representation learning is a powerful tool to solve many critical problems in
social media studies. Reasonable user representations in vector space could help build a recommendation
system [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], conduct social analysis [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ], detect bot accounts [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6, 7, 8</xref>
        ], and so on. To obtain
user embeddings of higher quality, many multimodal methods have been proposed to fully utilize all types of
available information from social networks, including interaction graphs, user profiles, images, and
texts from their posts [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9, 10, 11, 12</xref>
        ]. Compared with models using single-modality data, multimodal
methods utilize more information from social media platforms. Hence, they usually achieve better
results in downstream tasks.
      </p>
      <p>
        Among all modalities in social networks, user-interactive graphs (i.e., what they do) and text content
(i.e., what they say) are the two most frequently used options, due to their good availability across
different datasets and their large amounts of observations. Graph-neural-network (GNN) models [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13,
14, 15</xref>
        ] make it more convenient to fuse both the text information and the graph-structure information of
social-network users, where text embeddings from language models such as GloVe [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] or BERT [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] are
usually directly incorporated into GNNs as node attributes. Although those approaches have achieved
great performance in a variety of downstream tasks [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], the text information and the graph-structure
information are fully entangled with each other, which makes it hard to illustrate the two modalities’
respective contributions to learning each user’s representation.
      </p>
      <p>© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>[Figure 1: Conflicting signals for one user. From the graph information (retweets, follows, and mentions involving accounts such as Ivanka Trump, Kevin McCarthy, and Bernie Sanders), Elon Musk is most likely a Republican; from the text information (tweet keywords such as Silicon Valley, Hollywood, Tech, Clean energy, ...), he could be a Democrat.]</p>
      <p>
        Researchers have already found that different groups of users can behave very differently on social
media [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. If such differences are not correctly captured, it might cause significant bias in user
attribute prediction (e.g., political stance prediction) [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Hence, when learning multi-modal user
representations for different users, it is not only important to ask what the prediction results are, but
also why we are making such predictions (e.g., are those predictions due to the same reason?).
Only in that way can we provide more insights into the user modeling, and potentially
enable unbiased and personalized downstream analysis for different user groups.
      </p>
      <p>
        On the other hand, under a multi-modality setting, if one aspect of a user’s data is untrustworthy or
misleading, it might still be fused into the model and make the performance lower than that of single-modality
models [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Consider the case where we want to make a political ideology prediction for Elon Musk
based on his Twitter content before the 2020 U.S. presidential election (Figure 1), when he had not yet
revealed his clear Republican political stance. If we trust the follower-followee graph structure
information, we can see that he is likely to be a Republican, since he follows more Republicans than
Democrats and has more frequent interactions with verified Republican accounts. However, in
his tweet content, his word choice also shows some Democratic traits. Due to the existence of such
conflicting information, being able to automatically identify which modality is more trustworthy for
each individual becomes essential to building an accurate social media user embedding for different
groups of users.
      </p>
      <p>To address the above two shortcomings of text-graph fusion in social networks, we propose a simple
yet effective framework called Contribution-Aware Multimodal User Embedding (CAMUE), which can
identify and remove a misleading modality from specific social network users during text-graph fusion,
in an explainable way. CAMUE uses a learnable attention module to decide whether we should trust
the text information or the graph structure information when predicting individual user attributes,
such as political stance. The framework then outputs a clear contribution map for each modality on
each user, allowing personalized explanations for downstream analysis and recommendations. For
ambiguous users whose text and graph structure information disagree, our framework can successfully
mitigate unreliable information among different modalities by automatically adjusting the weight of
that information accordingly.</p>
      <p>
        We conduct experiments on the TIMME dataset [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] used for a Twitter political ideology prediction
task. We observe that our contribution map can give us some interesting new insights. A quantitative
analysis of different Twitter user sub-groups shows that link information (i.e., the interaction graph)
contributes more than text information for most users. This suggests that political advertising
agencies should gather more interaction-graph information about Twitter users in the future when creating
personalized advertisement content, instead of relying too much on their text data. We also observe
that when the graph and text backbones are set to R-GCN and GloVe respectively, our approach ignores
the unreliable GloVe embedding and achieves better prediction results. When the text modality is
switched to a more accurate BERT embedding, our framework can assign graph/text weights for different
users accordingly and achieve performance comparable to existing R-GCN-based fusion methods. We
pick 9 celebrities among the 50 most-followed Twitter accounts, such as Elon Musk. A detailed
qualitative analysis of their specific Twitter behaviors shows that our contribution map models their
online behaviors well. Finally, we run experiments on the TwiBot-20-Sub dataset [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] used for a Twitter
human/bot classification task, showing that our framework can be generalized to other user attribute
prediction tasks. By creating social media user embeddings that are more explainable, reliable, and
effective, our framework enables improved customized content delivery.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries and Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Multimodal Social Network User Embedding</title>
        <p>
          Social network user embedding is a popular research field that aims to build accurate user
representations. A desirable user embedding model should accurately map sparse user-related features
in high-dimensional spaces to dense representations in low-dimensional spaces. Multimodal social
network user embedding models utilize different types of user data to boost their performance.
Commonly-seen modality combinations include graph-structure (i.e. link) data and text data [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ],
graph-structure data and tabular data [
          <xref ref-type="bibr" rid="ref24 ref25 ref9">24, 25, 9</xref>
          ], and graph-structure data, text data and image data
altogether [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ], etc.
        </p>
        <p>
          Among those multi-modality methods, the fusion of graph-structure data and text data has always
been one of the mainstream approaches for user embedding. At an earlier stage, without much help
from the GNN models, most works trained the network-embedding and text-embedding separately and
fused them using a joint loss [
          <xref ref-type="bibr" rid="ref28 ref29 ref30 ref31">28, 29, 30, 31</xref>
          ]. With the help of the GNN models, a new type of fusion
method gained popularity, where the users’ text-embeddings are directly incorporated into GNNs as
node attributes [
          <xref ref-type="bibr" rid="ref23 ref32 ref33">23, 32, 33</xref>
          ].
        </p>
        <p>
          Despite their good performance, all existing models do not explain how much the graph structure
and the text information of specific users contribute to the final prediction results, making it difficult to
give customized modality weights for downstream analysis or recommendations. Also, if one modality
is poorly learned, it can be counter-effective to the user embedding quality, making it even worse than
their single-modality counterparts [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. How to address this problem in a universally-learned way,
instead of through heuristic-based information filtering, has largely gone under-explored. Hence, we propose
a framework that not only utilizes both text and graph-structure information, but also reveals their
relative importance along with the prediction result.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Graph Neural Network</title>
        <p>Graph Neural Networks (GNNs) are a family of deep learning models that learn node embeddings through
iterative aggregation of information from neighboring nodes, using a convolutional operator. Most GNN
architectures include a graph convolution layer in a form that can be characterized as message passing
and aggregation. A general formula for such convolution layers is:</p>
        <p>H(l) = σ(Ã H(l−1) W(l)), (1)
where H(l) represents the hidden node representations of all nodes at layer l, the operator σ is a non-linear
activation function, the graph-convolutional filter Ã is a matrix that usually takes the form of a
transformed (e.g., normalized) adjacency matrix A, and the layer-l weight W(l) is learnable.</p>
        <p>
          In the past few years, GNN models have reached SOTA performance in various graph-related
tasks. They are widely regarded as promising techniques to generate node embeddings for users in
social-network graphs [
          <xref ref-type="bibr" rid="ref13 ref14 ref15 ref34">13, 34, 14, 15</xref>
          ].
        </p>
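        <p>As a concrete sketch, the graph convolution of Equation 1 can be implemented in a few lines; here we assume a symmetrically normalized adjacency with self-loops as the filter and ReLU as the activation, which are common but illustrative choices:</p>

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H_out = ReLU(A_norm @ H @ W).

    A_norm plays the role of the filter A-tilde: the adjacency matrix
    with self-loops, symmetrically normalized by node degree.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# toy 3-node path graph: edges 0-1 and 1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 4)    # layer-(l-1) node representations
W = np.random.randn(4, 2)    # learnable layer-l weight
H_next = gcn_layer(A, H, W)  # layer-l representations, shape (3, 2)
```

        <p>Stacking such layers lets each node aggregate information from progressively larger neighborhoods.</p>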
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Neural Network-based Language Models</title>
        <p>
          The field of natural language processing has undergone a significant transformation with the advent
of neural-network-based language models. Word2Vec [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ] introduced two architectures: Continuous
Bag-of-Words (CBOW) and Skip-Gram. CBOW predicts a target word given its context, while
Skip-Gram predicts context words given a target word. GloVe [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] went further by incorporating
global corpus statistics into the learning process. ELMo [
          <xref ref-type="bibr" rid="ref36">36</xref>
          ] was another significant step forward, as
it introduced context-dependent word representations, making it possible for the same word to have
different embeddings if the context is different. BERT [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] is a highly influential model built
on the transformer architecture [
          <xref ref-type="bibr" rid="ref37">37</xref>
          ], pre-trained on large text corpora using, for example, masked
language modeling and next-sentence prediction tasks. Recently, large language models like GPT-3 [
          <xref ref-type="bibr" rid="ref38">38</xref>
          ],
InstructGPT [
          <xref ref-type="bibr" rid="ref39">39</xref>
          ], and ChatGPT have achieved signi cant breakthroughs in natural-language-generation
tasks. All those models are frequently used to generate text embedding for social network users.
        </p>
        <p>Our framework does not rely on any specific language model, and we do not have to use LLMs.
Instead, we use language models as a replaceable component, making it possible for both simpler ones
like GloVe and more complicated ones like BERT to fit in. We explore several different options in the
experimental section.</p>
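        <p>Since the language model is a replaceable component, all our framework requires is a fixed-size embedding per user. A minimal sketch of that interface, with a GloVe-style word-vector-averaging encoder as one interchangeable implementation (the class, vocabulary, and names are toy placeholders, not part of any specific library):</p>

```python
import numpy as np

class AveragingTextEncoder:
    """GloVe-style encoder: average the word vectors of a user's text.

    Any encoder exposing encode(text) -> fixed-size np.ndarray (e.g. a
    BERT [CLS] embedding) could be dropped in instead.
    """
    def __init__(self, word_vectors, dim):
        self.word_vectors = word_vectors  # token -> np.ndarray
        self.dim = dim

    def encode(self, text):
        vecs = [self.word_vectors[t]
                for t in text.lower().split()
                if t in self.word_vectors]
        if not vecs:                       # all tokens out-of-vocabulary
            return np.zeros(self.dim)
        return np.mean(vecs, axis=0)

# toy 3-dimensional "pretrained" vectors
vocab = {"clean": np.array([1.0, 0.0, 0.0]),
         "energy": np.array([0.0, 1.0, 0.0])}
encoder = AveragingTextEncoder(vocab, dim=3)
emb = encoder.encode("Clean energy tech")  # "tech" is skipped as OOV
```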
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Multimodal Explanation Methods</title>
        <p>
          In the past, several methods have been proposed to improve the interpretability and explainability
of multimodal fusions [
          <xref ref-type="bibr" rid="ref40">40</xref>
          ]. Commonly used strategies include attention-based methods [
          <xref ref-type="bibr" rid="ref41 ref42 ref43">41, 42, 43</xref>
          ],
counterfactual-based methods [
          <xref ref-type="bibr" rid="ref44 ref45">44, 45</xref>
          ], scene graph-based methods [
          <xref ref-type="bibr" rid="ref46">46</xref>
          ] and knowledge graph-based
methods [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ]. Unfortunately, most of them focus on the fusion of the image and text modalities,
primarily for the visual question answering (VQA) task; to the best of our knowledge, no work focuses on improving the
explainability between the network structure data and text data in social-network user embedding.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Definition</title>
      <p>Our general goal is to propose a social network user embedding fusion framework that can answer: 1.
which modality (i.e. text or graph structure; saying or doing) contributes more to our user attribute
prediction, hence allowing more customized downstream user behavior analysis, and 2. which modality
should be given more trust for each user, automatically filtering out the untrustworthy information
when necessary, in order to achieve higher-quality multi-modal user embedding.</p>
      <sec id="sec-3-1">
        <title>3.1. Problem Formulation</title>
        <p>A general framework of our problem can be formulated as follows: given a social media interaction
graph G = (V, E), with node set V representing users and edge set E representing links between users,
let X = [x1, x2, x3, · · · , xn] be the text content of the n = |V| users, Y = [y1, y2, y3, · · · , yn] be the labels
of those users, and A = [A1, A2, · · · , Am] be the adjacency matrices of G, where m is the number of link types
and each Ai ∈ R^(n×n). Our training objective is:</p>
        <p>min E[L(f(G, X), Y)] (2)</p>
        <p>Here, L is the loss of our specific downstream task, and f is some function that combines the graph
structure information and the text information, producing a joint user embedding.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Preliminary Experiment</title>
        <p>
          To investigate the effectiveness of existing GNN-based multimodal fusion methods in filtering
the unreliable modality when the graph structure and text contradict each other, we run experiments using a
common fusion method that feeds the fine-tuned BERT features into the R-GCN backbone, similar to
the approaches in [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] and [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. We observe that this conventional fusion method fails to filter the
unreliable information for some of those ambiguous users. Table 1 shows two politicians whose Twitter
data contains misleading information, either in the graph structure or in the text data. While the
single-modality backbones, which are trained without the misleading information, give the correct predictions, the
multi-modality fusion method is confused by the misleading information, and hence is not able to make
correct predictions.
        </p>
        <p>These insights reveal the importance of having a more flexible and explainable framework for
learning multimodal user embeddings.</p>
        <p>H(1) = σ(concat(A1 + A2 + · · · + Am, BERTemb(X) W(1))) (3)</p>
        <p>H(2) = σ(H(1) W(2)) (4)</p>
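        <p>The two fusion equations above can be sketched with NumPy as follows, using ReLU in place of the unspecified activation and random matrices in place of trained weights; all dimensions are illustrative:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_text, d_hid = 4, 6, 5     # users, text-embedding dim, hidden dim
m = 2                          # number of link types

# adjacency matrices A1..Am, summed over link types
A = rng.integers(0, 2, size=(m, n, n)).astype(float)
A_sum = A.sum(axis=0)                       # A1 + A2 + ... + Am
bert_emb = rng.normal(size=(n, d_text))     # stand-in for BERTemb(X)
W1 = rng.normal(size=(d_text, d_hid))
W2 = rng.normal(size=(n + d_hid, 2))        # 2 output classes

# layer 1: concatenate graph rows with projected text features
H1 = np.maximum(np.concatenate([A_sum, bert_emb @ W1], axis=1), 0.0)
# layer 2: project down to the class logits
H2 = np.maximum(H1 @ W2, 0.0)               # shape (n, 2)
```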
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>We propose a framework called Contribution-Aware Multimodal User Embedding (CAMUE), a fusion
method for text data and graph-structure data when learning user embeddings in social networks. The
key ingredient of this framework is an attention-gate-based selection module, which is learned together
with the link and text data and decides which information we want to trust more for each particular
user.</p>
      <p>Our framework has three main parts: a text encoder, a graph encoder, and an attention-gate learner.
The text content of each user passes through the text encoder, which generates a text embedding for that
user. The embedding is then passed through a three-layer MLP for fine-tuning. The adjacency matrix of
the users passes through the graph encoder, which generates a node embedding for each user. At the same
time, both the text embedding and the graph adjacency matrix pass through our attention gate learner.
The output of this module is two attention weights, α and β, which control the proportions of our graph
structure information and text information. Without loss of generality, if we make R-GCN our graph
encoder and BERT our text encoder, our model will be trained in the following way (Equations 3-6, also
illustrated in Figure 2):</p>
      <p>Here, H and W are the hidden layers and weights of our attention gate learner, X = [x1, x2, x3, · · · , xn]
is the text content, BERTemb is the BERT encoding module, A = [A1, A2, · · · , Am] is the set of adjacency
matrices of G, and m is the number of link types.</p>
      <p>Then, our overall training objective becomes:
min E[L((α + ε) · R-GCNemb(G) + (β + ε) · BERTemb(X), Y)],
where ε acts as a regularizer to ensure our model is not overly dependent on a single modality.</p>
      <p>Our method offers two levels of separation. First, we separate the text encoder and the graph encoder
to allow better disentanglement of which data contributes more to our final prediction results. Second,
we separate the learning of the downstream tasks from the learning of which data modality (i.e. text or
graph structure) we can rely on more. This makes our framework adaptable to different downstream
social media user prediction tasks. The learned trustworthiness of the different modalities allows for
auto-adjustment of the weight between the graph structure and text modalities, hence filtering out any unreliable
information once it is discovered.</p>
      <p>Figure 2 shows the overall architecture of our framework; note that the graph structure encoder and
the text encoder could be replaced by any other models that serve the same purposes.</p>
      <p>[Figure 2: Overall architecture of CAMUE. The tweet data (text content and graph structure) is encoded by a Transformer encoder, producing a text embedding that is fine-tuned through a multi-layer perceptron, and by a GNN model with relation info; the attention gate learner weighs the two embeddings, and the losses are computed on the fused result.]</p>
      <p>
        We give a short complexity analysis of our architecture for the case of R-GCN + BERT: since we are
using sparse adjacency matrices for R-GCN, the graph encoder part has a complexity of O(Lgraph E Fgraph +
Lgraph N Fgraph^2) (according to [
        <xref ref-type="bibr" rid="ref48">48</xref>
        ]), where L is the number of layers, E is the number of edges, N is
the number of nodes, and F is the feature dimension. Since we fixed the maximum text length to
be a constant for the text encoder, it has a complexity of O(Ftext^2) (based on [
        <xref ref-type="bibr" rid="ref49">49</xref>
        ]). Since Ftext and
Fgraph are of comparable size, our fusion module has a complexity of O(Fgraph^2 + Ftext^2), so the
overall complexity is O(Lgraph E Fgraph + Lgraph N Fgraph^2 + Ftext^2); hence we are not adding extra time
complexity.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Experiments</title>
      <sec id="sec-5-1">
        <title>5.1. Tasks and Datasets</title>
        <p>We run experiments on two Twitter user prediction tasks: 1. predicting the political ideology of Twitter users (Democrat vs. Republican) and 2. predicting whether a Twitter user account is a human or a bot.</p>
        <sec id="sec-5-1-1">
          <title>5.1.1. TIMME</title>
          <p>
            TIMME [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ] introduced a multi-modality Twitter user dataset as a benchmark for the political ideology prediction task on Twitter users. TIMME contains 21,015 Twitter users and 6,496,112 Twitter interaction links. Those links include follows, retweets, replies, mentions, and likes. Together they form a large heterogeneous social network graph. TIMME also contains 6,996,310 raw tweets from those users. Hence, it is a good dataset for studying different fusion methods of text features and graph structure features. In TIMME, there are 586 labeled politicians and 2,976 randomly sampled users with a known political affiliation. Some of them are the ambiguous users we investigated before. Labeled nodes belong to either Democrats or Republicans. Note that the dataset cut-off time is 2020, so the political polarities of many public figures (e.g. Elon Musk) had not been revealed at that time.
          </p>
        </sec>
        <sec id="sec-5-1-2">
          <title>5.1.2. TwiBot-20-Sub</title>
          <p>
            TwiBot-20 [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ] is an extensive benchmark for Twitter bot detection, comprising 229,573 Twitter accounts, of which 11,826 are labeled as human users or bots. The dataset also contains 33,716,171 Twitter interaction links and 33,488,192 raw tweets. The links in TwiBot-20 include follows, retweets, and mentions. To further examine the generalizability of our method, we run experiments for Twitter bot account detection on the TwiBot-20 dataset. To reduce the computation cost of generating node features and text features, we randomly subsample 3,000 labeled users and 27,000 unlabeled users from the TwiBot-20 dataset, forming a new dataset called TwiBot-20-Sub. In this way, the size and label sparsity of the TwiBot-20-Sub dataset become comparable to the TIMME dataset.
          </p>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>5.1.3. Train-test Split</title>
        <p>We split the users of both datasets into an 80%:10%:10% ratio for the training set, validation set, and
test set respectively.</p>
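        <p>This split can be sketched as a seeded random partition of user indices (the helper and seed are illustrative, not the authors' exact procedure):</p>

```python
import numpy as np

def split_users(num_users, seed=0):
    """Shuffle user indices and cut them into 80% / 10% / 10%."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_users)
    n_train = int(0.8 * num_users)
    n_val = int(0.1 * num_users)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train_idx, val_idx, test_idx = split_users(1000)
```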
      </sec>
      <sec id="sec-5-3">
        <title>5.2. Implementation Detail</title>
        <p>To test the effectiveness of our framework across different models, we choose two single-modality text
encoders, GloVe and BERT, and two single-modality graph encoders, MLP and R-GCN.</p>
        <p>
          The GloVe embedding refers to the Wikipedia 2014 + Gigaword 5 (300d) pre-trained version.² The
BERT embedding refers to the sentence-level ([CLS] token) embedding of the BERT-base model [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ] after
fine-tuning the pre-trained model’s parameters on the tweets from our training set, which consists of 80%
of the users. We chose a maximum sequence length of 32. After encoding, we have a 300-dimensional text
embedding for GloVe and a 768-dimensional text embedding for BERT.
        </p>
        <p>
          We choose a modified version of R-GCN from TIMME [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] as an R-GCN graph encoder. R-GCN
[
          <xref ref-type="bibr" rid="ref34">34</xref>
          ] is a GNN model specifically designed for heterogeneous graphs with multiple relations. In the
TIMME paper, it was discovered that assigning different attention weights to the relation heads of the
R-GCN model could improve its performance. Hence, we adopt their idea and use the modified version
of R-GCN. We did not use the complete TIMME model, since it is designed for multiple tasks outside
our research scope and would overly complicate our model.
        </p>
        <p>We also choose a 3-layer MLP as another graph encoder for comparison; the adjacency list for each
user is passed to the MLP.</p>
        <p>
          Large language models (LLMs) like ChatGPT are powerful at understanding text, but they usually
have a huge number of parameters, making traditional supervised fine-tuning a hard and costly task
[
          <xref ref-type="bibr" rid="ref38">38</xref>
          ]. Instead, less resource-intensive methods like few-shot learning, prompt tuning, instruction tuning,
and chain-of-thought prompting are more frequently used to adapt LLMs to specific tasks [
          <xref ref-type="bibr" rid="ref51">51</xref>
          ]. We do not use
large language models as an option for the text encoder, since those methods are not compatible
with our framework – they do not provide a well-defined gradient to train our attention gate learner.
        </p>
        <p>We run experiments on a single NVIDIA Tesla A100 GPU, on the PyTorch platform. We use the same set of hyper-parameters
as in the TIMME paper: a learning rate of 0.01, 100 GCN hidden units,
and a dropout rate of 0.1. For a fair comparison, we run each algorithm on each task with 10 random
seeds.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Results and Analysis</title>
      <sec id="sec-6-1">
        <title>6.1. Contribution Map</title>
        <p>To show that our framework can effectively provide personalized explanations during the fusion of
modalities, we draw the contribution map based on the α (graph weight) and β (text weight) attention for
users from each dataset. The darker the color, the closer the weight of the corresponding modality is to 1.
In the contribution map, pure white indicates a zero contribution (0) from a modality, while pure dark
blue indicates a full contribution (1).</p>
        <p>² glove.6B.zip from https://nlp.stanford.edu/projects/glove/</p>
        <p>The top figure of Figure 3 shows the contribution map output when the text encoder is BERT and the
graph encoder is R-GCN, on a subgroup of the TIMME dataset consisting of some politicians and some
random Twitter users. As we can see, there is a clear cut between the percentages of contributions from
the different modalities to the final prediction. Notably, for the two ambiguous politician users we
mentioned earlier (Ryan Costello and Sheldon Whitehouse), CAMUE gives the correct attention:
we should trust the text data more for Mr. Costello, while trusting the graph structure data more for
Mr. Whitehouse. To avoid any misuse of personal data, we hide the names of random
Twitter users and only include politicians whose Twitter accounts are publicly available at 3.</p>
        <p>The bottom figure of Figure 3 shows the contribution map output when the text encoder is GloVe
and the graph encoder is R-GCN, on the same subgroup of the TIMME dataset. Note that for all shown
users, the text information does not contribute to the final prediction. This could be attributed to the
fact that GloVe is not very powerful for sentence embedding, especially when the text is long. This
contribution map shows that our framework filters out the text modality almost completely when it is
not helpful for our user embedding learning. As we can see from Table 2, the traditional fusion method
for GloVe+R-GCN only yields an accuracy of 0.840, which is much lower than the single graph-structure-modality
prediction (0.953) using R-GCN, due to the unreliable GloVe embedding. In contrast, our CAMUE
method obtains a higher accuracy (0.954) than all single-modality models, since it disregards the unreliable
information.</p>
        <p>Figure 4 shows the contribution map output for the same set of encoders on a subgroup of the
Twibot-20-Sub dataset. There is also a clear cut between the percentages of contributions from different
modalities, for both the human Twitter accounts and the bot accounts.</p>
        <p>Hence, we verify that our framework can both provide personalized modality contributions and
drop low-quality information when fusing modalities. A quantitative analysis of how this
low-quality-information filtering benefits general model performance can be found in the
next section, and a qualitative analysis of the new insights we can gain from the output of
our framework can be found in the case-study section.</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. General Performance</title>
        <p>Table 2 shows the performance of CAMUE with different combinations of encoders. The traditional
fusion method in Figure 2 is denoted as “simple fusion”. For MLP, there is no such natural fusion
method. We also add “CAMUE, fixed params” as an ablation experiment to demonstrate the effectiveness of
our attention gate-based selection module.</p>
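        <p>The contrast being ablated can be sketched as follows, assuming a scalar sigmoid gate per user with β = 1 − α (the actual CAMUE gate may be parameterized differently, e.g., vector-valued):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def fuse(h_graph, h_text, gate_logit):
    """Gated fusion: alpha weights the graph embedding, beta = 1 - alpha the text."""
    alpha = sigmoid(gate_logit)  # in CAMUE, the logit comes from a learned
    beta = 1.0 - alpha           # per-user attention gate network
    fused = [alpha * g + beta * t for g, t in zip(h_graph, h_text)]
    return fused, alpha, beta

# "CAMUE, fixed params" ablation: gate_logit is a shared constant rather than
# the output of the learned gate, so alpha is identical for every user.
fused, alpha, beta = fuse([1.0, 0.0], [0.0, 1.0], gate_logit=0.0)
```

Training the logit (per user) rather than fixing it is exactly what lets the model down-weight an untrustworthy modality for specific users.
</p>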
        <p>We observe that, among those combinations, simple fusion methods are sometimes significantly worse
than single-modality methods (e.g., GloVe+R-GCN vs. R-GCN only) due to untrustworthiness in
one of the modalities. However, any fusion under our CAMUE framework always performs better
than the corresponding single-modality methods. This suggests that when one particular modality is
not trustworthy (e.g., the GloVe embedding), our algorithm benefits from attending to the more reliable
modality between text and graph structure and learning not to consider the unreliable one when making
predictions (as we can see in Figure 3, bottom).</p>
        <p>It is also notable that our CAMUE method outperforms “CAMUE, fixed params”. These results suggest
that adjusting the weights of different modalities dynamically yields better performance than fixing
them. Finally, when the text modality is switched to the more accurate BERT embedding,
our framework still gives performance comparable to the corresponding simple fusion methods.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Case Studies</title>
        <p>
          User Sub-groups Table 3 gives a quantitative analysis when the text encoder is BERT and the graph
encoder is R-GCN, for different sub-groups of Twitter users we are interested in. In general, graph
structure information contributes the most when it comes to bot accounts. One possible explanation for
this is the variety of bot accounts on Twitter, such as those for business advertising, political outreach,
and sports marketing [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. Bots with different purposes might talk very differently; however, they may
share some common rule-based policies when interacting with humans on Twitter [
          <xref ref-type="bibr" rid="ref52 ref53">52, 53</xref>
          ].
        </p>
        <p>
          Graph structure information contributes the second most when it comes to politicians. This is also
not surprising, since politicians are generally more inclined to retweet or mention events related to their
political parties [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. It is also notable that the weight of text information for Republicans is slightly
less than that for Democrats. This aligns with the findings in [
          <xref ref-type="bibr" rid="ref54">54</xref>
          ] that Democrats have slightly more
politically polarized word choices than Republicans.
        </p>
        <p>For random users, the weight of text information is the largest, although still not as large as the
weight of graph structure information. This could be attributed to the pattern that many random users
interact frequently on Twitter with non-celebrity family and friends, who are more likely to be
politically neutral.</p>
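        <p>The per-sub-group averages of this kind (as in Table 3) can be reproduced, in sketch form, by grouping users and averaging their learned weights; the user data and weights below are illustrative, not the paper's actual numbers:

```python
from collections import defaultdict
from statistics import mean

# (group, alpha_graph, beta_text) per user -- illustrative values only.
users = [
    ("bot", 0.95, 0.05), ("bot", 0.90, 0.10),
    ("politician", 0.85, 0.15),
    ("random", 0.70, 0.30), ("random", 0.65, 0.35),
]

by_group = defaultdict(list)
for group, alpha, beta in users:
    by_group[group].append((alpha, beta))

# Average (graph, text) contribution per sub-group.
avg_weights = {
    g: (mean(a for a, _ in rows), mean(b for _, b in rows))
    for g, rows in by_group.items()
}
```

With weights like these, bots show the highest average graph contribution and random users the highest text contribution, mirroring the ordering discussed above.
</p>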
        <p>
          Table 4 shows the predicted political stances and the main contributing modalities of a group of
news agencies. We can see that most of them have more reliable graph structure information
than text information. This is not surprising, since most news agencies tend to use neutral words to
increase their credibility, so it is hard to infer strong political stances from their text embeddings,
except for a few, such as Fox News and The Guardian, which are known to use politically polarized terms
more often [
          <xref ref-type="bibr" rid="ref54 ref55">54, 55</xref>
          ]. Our framework is able to capture this unique behavior pattern for Fox News and
The Guardian, while giving mostly accurate political-polarity predictions aligning with the results in [
          <xref ref-type="bibr" rid="ref54">54</xref>
          ]
and 4.
        </p>
        <p>To conclude, we are able to obtain customized user behavior patterns through our multi-modality
fusion. Those patterns can provide insight into which modality we should focus on more for different
types of users, for downstream tasks such as personalized recommendation, social science analysis, or
malicious user detection.</p>
        <p>
          Selected Celebrities from TIMME Dataset Since there exists no ground-truth contribution
of the two aspects of user profiles (text and graph structure) to the final predictions, we conduct a case study
by qualitatively evaluating a subset of users to validate our framework’s capability to give personalized
explanations. We selected 9 celebrities among the top 50 most-followed Twitter accounts from 5, whose
Twitter accounts appear in the TIMME dataset, as we are not allowed to disclose regular Twitter users’
information. We obtain the political-polarity predictions of those celebrities and record the percentage
of text/graph structure information that contributes to their political-polarity predictions (see Figure 5).
• Elon Musk: Before 2020 (the dataset cut-off), Elon Musk’s political views in his tweet text were
often complex. He claimed multiple times not to take the viewpoints in his tweets too seriously
6. This aligns with the low contribution weight of his texts on his political-stance prediction.
However, on the graph level, 66.67% of the politicians Elon Musk liked more than once have
also liked Trump at least once. This is significantly larger than the average in the TIMME
dataset (23.67%). This could be a strong reason why our graph structure weight is so high and
why we predict Elon Musk to be Republican-leaning. Our prediction proved correct when in
2022 (which is beyond our dataset cut-off time, 2020 [
          <xref ref-type="bibr" rid="ref54">54</xref>
          ]), Elon Musk claimed that he would vote
for Republicans in his tweet 7. This is a strong indicator that our framework is using correct
information.
• LeBron James: In his tweets, LeBron James frequently shows his love and respect for Democratic
President Obama 8. Our prediction that he is Democrat-leaning, with a strong text contribution,
aligns with this observation.
• Lady Gaga: Similarly to James, Lady Gaga also expresses explicit support in her tweets for
Democratic candidates 9. Our graph weight is 0, meaning that the text alone is sufficient to
predict that she is Democrat-leaning.
• Bill Gates: He usually avoids making explicit statements in his tweets about whether he supports
Democrats or Republicans. Although our model predicts him as Republican, the probability
margin is very small (11%).
• Oprah Winfrey: During the 2016 presidential campaign, she frequently retweeted and mentioned her support
for Democratic candidate Hillary Clinton 10, making the graph structure information
a strong indicator of her Democratic stance.
• Jimmy Fallon: Jimmy Fallon has managed to maintain a sense of political neutrality in his
tweets. His text contribution to the final prediction is 0. Even though the Twitter graph structure
indicates that he is Democrat-leaning, we still do not know whether in real life he is a Democrat
or a Republican.
• Katy Perry: Just like Oprah Winfrey, Katy Perry also interacted with and supported Hillary
Clinton during the 2016 election, a reason why we predict her as Democrat-leaning from the
graph structure. Although she supported some Republican politicians in 2022 11, that is beyond the
dataset cut-off.
• Justin Timberlake: Justin Timberlake has frequent positive interactions with President Obama 12
and firmly supports Hillary Clinton in his tweets 13, both suggesting that he is Democrat-leaning.
Our model assigns similar weights to text and graph structure, suggesting that both contribute
to that prediction equally.
• Taylor Swift: In the case of Taylor Swift, the model fails to give the correct prediction. Her tweets
show that she voted for Biden in 2020 14, but the prediction is Republican. One reason is that,
at the graph structure level, the majority of Taylor Swift’s followers are classified as Republican
(67.09%) in the dataset, which can mislead the graph encoder.
4https://www.allsides.com/media-bias/ratings
5https://socialblade.com/twitter/top/100
6https://twitter.com/elonmusk/status/1007780580396683267
7https://twitter.com/elonmusk/status/1526997132858822658
8https://twitter.com/KingJames/status/1290774046964101123, https://twitter.com/KingJames/status/1531837452591042561
9https://twitter.com/ladygaga/status/1325120729130528768
10https://twitter.com/Oprah/status/780588770726993920
        </p>
        <p>Overall, we conclude that graph structure information is usually more useful when predicting the
political polarities of those celebrities, which aligns with the quantitative results in Table 3. As we can
see, different celebrities may have very different behavior patterns. Those patterns can be correctly
captured and explained by our contribution weights, which confirms the effectiveness of our framework.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>In this paper, we investigate some potential limitations of existing methods for fusing text information
and graph structure information in user representation learning on social networks. We then propose
a contribution-aware multimodal social-media user embedding with a learnable attention module.
Our framework can automatically determine the reliability of text and graph-structure information
when learning user embeddings. It filters out unreliable modalities for specific users across various
downstream tasks. Since our framework is not bound to any specific model, it has great potential to be
adapted to any graph-structure embedding component and text embedding component, where affordable.
More importantly, our models can score the reliability of different information modalities
for each user. That gives our framework great capability for personalized downstream analysis and
recommendation. Our work can bring research attention to identifying and removing misleading
information modalities caused by differences in social-network user behavior, and paves the way for more
explainable, reliable, and effective social-media user representation learning.</p>
      <p>Some possible future extensions include adding modalities other than text and graphs (e.g.,
image and video data from users’ posts). Also, we consider user identities to be static throughout
our analysis, which might not be the case in many scenarios. We could bring in time as a factor to produce a
multi-modality dynamic social-media user embedding. For example, a user’s text content may be more
trustworthy in the first few months, while interactive graph-structure information becomes more reliable
over longer terms.
11https://twitter.com/katyperry/status/1533246681910628352
12https://twitter.com/jtimberlake/status/1025867320407846912
13https://twitter.com/jtimberlake/status/768191007036891136
14https://twitter.com/taylorswift13/status/1266392274549776387</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W. X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Y.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-R.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Connecting social media to e-commerce: Cold-start product recommendation using microblogging information</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>28</volume>
          (
          <year>2016</year>
          )
          <fpage>1147</fpage>
          -
          <lpage>1159</lpage>
          . doi:10.1109/TKDE.2015.2508816.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <article-title>Graph neural networks for social recommendation</article-title>
          ,
          <source>in: The World Wide Web Conference</source>
          , WWW '19,
          Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          , pp.
          <fpage>417</fpage>
          -
          <lpage>426</lpage>
          . URL: https://doi.org/10.1145/3308558.3313488. doi:10.1145/3308558.3313488.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Preoţiuc-Pietro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hopkins</surname>
          </string-name>
          , L. Ungar,
          <article-title>Beyond binary labels: Political ideology prediction of Twitter users</article-title>
          ,
          <source>in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          ,
          <source>Association for Computational Linguistics</source>
          , Vancouver, Canada,
          <year>2017</year>
          , pp.
          <fpage>729</fpage>
          -
          <lpage>740</lpage>
          . URL: https://aclanthology.org/P17-1068. doi:10.18653/v1/P17-1068.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Goldwasser</surname>
          </string-name>
          ,
          <article-title>Twitter user representation using weakly supervised graph embedding</article-title>
          ,
          <source>Proceedings of the International AAAI Conference on Web and Social Media</source>
          <volume>16</volume>
          (
          <year>2022</year>
          )
          <fpage>358</fpage>
          -
          <lpage>369</lpage>
          . URL: https://ojs.aaai.org/index.php/ICWSM/article/view/19298. doi:10.1609/icwsm.v16i1.19298.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ren</surname>
          </string-name>
          , E. Ferrara,
          <article-title>Retweet-BERT: Political leaning detection using language features and information diffusion on social networks</article-title>
          ,
          <source>Proceedings of the International AAAI Conference on Web and Social Media</source>
          <volume>17</volume>
          (
          <year>2023</year>
          )
          <fpage>459</fpage>
          -
          <lpage>469</lpage>
          . URL: https://ojs.aaai.org/index.php/ICWSM/article/view/22160. doi:10.1609/icwsm.v17i1.22160.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>O.</given-names>
            <surname>Varol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Menczer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Flammini</surname>
          </string-name>
          ,
          <article-title>Online human-bot interactions: Detection, estimation, and characterization</article-title>
          ,
          <source>Proceedings of the International AAAI Conference on Web and Social Media</source>
          <volume>11</volume>
          (
          <year>2017</year>
          )
          <fpage>280</fpage>
          -
          <lpage>289</lpage>
          . URL: https://ojs.aaai.org/index.php/ICWSM/article/view/14871. doi:10.1609/icwsm.v11i1.14871.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kudugunta</surname>
          </string-name>
          , E. Ferrara,
          <article-title>Deep neural networks for bot detection</article-title>
          ,
          <source>Information Sciences 467</source>
          (
          <year>2018</year>
          )
          <fpage>312</fpage>
          -
          <lpage>322</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0020025518306248. doi:10.1016/j.ins.2018.08.019.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L. H. X.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Carley</surname>
          </string-name>
          ,
          <article-title>Botbuster: Multi-platform bot detection using a mixture of experts</article-title>
          ,
          <source>Proceedings of the International AAAI Conference on Web and Social Media</source>
          <volume>17</volume>
          (
          <year>2023</year>
          )
          <fpage>686</fpage>
          -
          <lpage>697</lpage>
          . URL: https://ojs.aaai.org/index.php/ICWSM/article/view/22179. doi:10.1609/icwsm.v17i1.22179.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Heterogeneous graph network embedding for sentiment analysis on social media</article-title>
          ,
          <source>Cognitive Computation 13</source>
          (
          <year>2021</year>
          )
          <fpage>81</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ma</surname>
          </string-name>
          , C. Zhang,
          <article-title>Social bots detection via fusing bert and graph convolutional networks</article-title>
          ,
          <source>Symmetry</source>
          <volume>14</volume>
          (
          <year>2021</year>
          )
          <fpage>30</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>SocialGCN: An efficient graph convolutional network based model for social recommendation</article-title>
          , CoRR abs/1811.02815 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1811.02815. arXiv:1811.02815.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , J. Xu,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Network embedding by fusing multimodal contents and links</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>171</volume>
          (
          <year>2019</year>
          )
          <fpage>44</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T. N.</given-names>
            <surname>Kipf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Welling</surname>
          </string-name>
          ,
          <article-title>Semi-supervised classification with graph convolutional networks</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2017</year>
          . URL: https://openreview.net/forum?id=SJU4ayYgl.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ying</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leskovec</surname>
          </string-name>
          ,
          <article-title>Inductive representation learning on large graphs</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Velickovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cucurull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Casanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Romero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , et al.,
          <article-title>Graph attention networks</article-title>
          , stat
          <volume>1050</volume>
          (
          <year>2017</year>
          )
          <fpage>10</fpage>
          -
          <lpage>48550</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pennington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          , Glove:
          <article-title>Global vectors for word representation</article-title>
          ,
          <source>in: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>1532</fpage>
          -
          <lpage>1543</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <article-title>Pre-training of deep bidirectional transformers for language understanding</article-title>
          , in: J.
          <string-name>
            <surname>Burstein</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Doran</surname>
          </string-name>
          , T. Solorio (Eds.),
          <source>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers),
          <source>Association for Computational Linguistics</source>
          , Minneapolis, Minnesota,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          . URL: https://aclanthology.org/N19-1423. doi:10.18653/v1/N19-1423.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name><given-names>J.</given-names> <surname>Zhou</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Cui</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Hu</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Yang</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Liu</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Li</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Sun</surname></string-name>,
          <article-title>Graph neural networks: A review of methods and applications</article-title>,
          <source>AI Open</source>
          <volume>1</volume>
          (<year>2020</year>)
          <fpage>57</fpage>-<lpage>81</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name><given-names>E.</given-names> <surname>Mustafaraj</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Finn</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Whitlock</surname></string-name>,
          <string-name><given-names>P. T.</given-names> <surname>Metaxas</surname></string-name>,
          <article-title>Vocal minority versus silent majority: Discovering the opinions of the long tail</article-title>,
          <source>in: 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing</source>,
          <year>2011</year>, pp. <fpage>103</fpage>-<lpage>110</lpage>.
          doi:10.1109/PASSAT/SocialCom.2011.188.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name><given-names>G.</given-names> <surname>Blank</surname></string-name>,
          <article-title>The digital divide among Twitter users and its implications for social research</article-title>,
          <source>Social Science Computer Review</source>
          <volume>35</volume>
          (<year>2017</year>)
          <fpage>679</fpage>-<lpage>697</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name><given-names>Z.</given-names> <surname>Xiao</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Song</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Xu</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Ren</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Sun</surname></string-name>,
          TIMME:
          <article-title>Twitter ideology-detection via multi-task multi-relational embedding</article-title>,
          <source>in: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, KDD '20</source>,
          Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>, pp. <fpage>2258</fpage>-<lpage>2268</lpage>.
          URL: https://doi.org/10.1145/3394486.3403275. doi:10.1145/3394486.3403275.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name><given-names>S.</given-names> <surname>Feng</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Wan</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Li</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Luo</surname></string-name>,
          TwiBot-20:
          <article-title>A comprehensive Twitter bot detection benchmark</article-title>,
          <source>Proceedings of the 30th ACM International Conference on Information &amp; Knowledge Management</source>
          (<year>2021</year>).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name><given-names>M.</given-names> <surname>Ribeiro</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Calais</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Santos</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Almeida</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Meira Jr.</surname></string-name>,
          <article-title>Characterizing and detecting hateful users on Twitter</article-title>,
          <source>in: Proceedings of the International AAAI Conference on Web and Social Media</source>,
          volume <volume>12</volume>,
          <year>2018</year>.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name><given-names>Z.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Yang</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Bu</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Zhou</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Yu</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Ester</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Wang</surname></string-name>,
          ANRL:
          <article-title>Attributed network representation learning via deep neural networks</article-title>,
          <source>in: IJCAI</source>,
          volume <volume>18</volume>,
          <year>2018</year>, pp. <fpage>3155</fpage>-<lpage>3161</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name><given-names>L.</given-names> <surname>Liao</surname></string-name>,
          <string-name><given-names>X.</given-names> <surname>He</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>T.-S.</given-names> <surname>Chua</surname></string-name>,
          <article-title>Attributed social network embedding</article-title>,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>30</volume>
          (<year>2018</year>)
          <fpage>2257</fpage>-<lpage>2270</lpage>.
          doi:10.1109/TKDE.2018.2819980.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name><given-names>W.</given-names> <surname>Zhang</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Wang</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Zha</surname></string-name>,
          <article-title>User-guided hierarchical attention network for multi-modal social image popularity prediction</article-title>,
          <source>in: Proceedings of the 2018 World Wide Web Conference</source>,
          <year>2018</year>, pp. <fpage>1277</fpage>-<lpage>1286</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name><given-names>J.</given-names> <surname>Ni</surname></string-name>,
          <string-name><given-names>Z.</given-names> <surname>Huang</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Hu</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Lin</surname></string-name>,
          <article-title>A two-stage embedding model for recommendation with multimodal auxiliary information</article-title>,
          <source>Information Sciences</source>
          <volume>582</volume>
          (<year>2022</year>)
          <fpage>22</fpage>-<lpage>37</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name><given-names>S.</given-names> <surname>Pan</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Ding</surname></string-name>,
          <article-title>Social media-based user embedding: A literature review</article-title>,
          <source>in: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, International Joint Conferences on Artificial Intelligence Organization</source>,
          <year>2019</year>, pp. <fpage>6318</fpage>-<lpage>6324</lpage>.
          URL: https://doi.org/10.24963/ijcai.2019/881. doi:10.24963/ijcai.2019/881.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name><given-names>T. H.</given-names> <surname>Do</surname></string-name>,
          <string-name><given-names>D. M.</given-names> <surname>Nguyen</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Tsiligianni</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Cornelis</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Deligiannis</surname></string-name>,
          <article-title>Twitter user geolocation using deep multiview learning</article-title>,
          <source>CoRR</source> abs/1805.04612
          (<year>2018</year>).
          URL: http://arxiv.org/abs/1805.04612. arXiv:1805.04612.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name><given-names>X.</given-names> <surname>Song</surname></string-name>,
          <string-name><given-names>Z.-Y.</given-names> <surname>Ming</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Nie</surname></string-name>,
          <string-name><given-names>Y.-L.</given-names> <surname>Zhao</surname></string-name>,
          <string-name><given-names>T.-S.</given-names> <surname>Chua</surname></string-name>,
          <article-title>Volunteerism tendency prediction via harvesting multiple social networks</article-title>,
          <source>ACM Trans. Inf. Syst.</source>
          <volume>34</volume>
          (<year>2016</year>).
          URL: https://doi.org/10.1145/2832907. doi:10.1145/2832907.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name><given-names>A.</given-names> <surname>Benton</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Arora</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Dredze</surname></string-name>,
          <article-title>Learning multiview embeddings of Twitter users</article-title>,
          <source>in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)</source>,
          <year>2016</year>, pp. <fpage>14</fpage>-<lpage>19</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name><given-names>L.</given-names> <surname>Yao</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Mao</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Luo</surname></string-name>,
          <article-title>Graph convolutional networks for text classification</article-title>,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>,
          volume <volume>33</volume>,
          <year>2019</year>, pp. <fpage>7370</fpage>-<lpage>7377</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name><given-names>Y.</given-names> <surname>Dou</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Shu</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Xia</surname></string-name>,
          <string-name><given-names>P. S.</given-names> <surname>Yu</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Sun</surname></string-name>,
          <article-title>User preference-aware fake news detection</article-title>,
          <source>CoRR</source> abs/2104.12259
          (<year>2021</year>).
          URL: https://arxiv.org/abs/2104.12259. arXiv:2104.12259.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name><given-names>M.</given-names> <surname>Schlichtkrull</surname></string-name>,
          <string-name><given-names>T. N.</given-names> <surname>Kipf</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Bloem</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Van Den Berg</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Titov</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Welling</surname></string-name>,
          <article-title>Modeling relational data with graph convolutional networks</article-title>,
          <source>in: The Semantic Web: 15th International Conference, ESWC 2018</source>,
          Heraklion, Crete, Greece, June 3-7, <year>2018</year>,
          Proceedings 15, Springer,
          <year>2018</year>, pp. <fpage>593</fpage>-<lpage>607</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name><given-names>T.</given-names> <surname>Mikolov</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Sutskever</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>G. S.</given-names> <surname>Corrado</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Dean</surname></string-name>,
          <article-title>Distributed representations of words and phrases and their compositionality</article-title>,
          in: C. Burges, L. Bottou, M. Welling, Z. Ghahramani, K. Weinberger (Eds.),
          <source>Advances in Neural Information Processing Systems</source>,
          volume <volume>26</volume>,
          Curran Associates, Inc.,
          <year>2013</year>.
          URL: https://proceedings.neurips.cc/paper_files/paper/2013/file/9aa42b31882ec039965f3c4923ce901b-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name><given-names>M. E.</given-names> <surname>Peters</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Neumann</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Iyyer</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Gardner</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Clark</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Lee</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Zettlemoyer</surname></string-name>,
          <article-title>Deep contextualized word representations</article-title>,
          in: M. Walker, H. Ji, A. Stent (Eds.),
          <source>Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)</source>,
          Association for Computational Linguistics, New Orleans, Louisiana,
          <year>2018</year>, pp. <fpage>2227</fpage>-<lpage>2237</lpage>.
          URL: https://aclanthology.org/N18-1202. doi:10.18653/v1/N18-1202.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name><given-names>A.</given-names> <surname>Vaswani</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Shazeer</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Parmar</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Uszkoreit</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Jones</surname></string-name>,
          <string-name><given-names>A. N.</given-names> <surname>Gomez</surname></string-name>,
          <string-name><given-names>Ł.</given-names> <surname>Kaiser</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Polosukhin</surname></string-name>,
          <article-title>Attention is all you need</article-title>,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          (<year>2017</year>).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name><given-names>T.</given-names> <surname>Brown</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Mann</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Ryder</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Subbiah</surname></string-name>,
          <string-name><given-names>J. D.</given-names> <surname>Kaplan</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Dhariwal</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Neelakantan</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Shyam</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Sastry</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Askell</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Agarwal</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Herbert-Voss</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Krueger</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Henighan</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Child</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Ramesh</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Ziegler</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Wu</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Winter</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Hesse</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Sigler</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Litwin</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Gray</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Chess</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Clark</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Berner</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>McCandlish</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Radford</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Sutskever</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Amodei</surname></string-name>,
          <article-title>Language models are few-shot learners</article-title>,
          in: H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, H. Lin (Eds.),
          <source>Advances in Neural Information Processing Systems</source>,
          volume <volume>33</volume>,
          Curran Associates, Inc.,
          <year>2020</year>, pp. <fpage>1877</fpage>-<lpage>1901</lpage>.
          URL: https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ouyang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Almeida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Wainwright</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mishkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Agarwal,
          <string-name>
            <given-names>K.</given-names>
            <surname>Slama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schulman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kelton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Simens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Welinder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Christiano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Leike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lowe</surname>
          </string-name>
          ,
          <article-title>Training language models to follow instructions with human feedback</article-title>
          ,
          <source>in: Proceedings of the 36th International Conference on Neural Information Processing Systems</source>
          , NIPS '22, Curran Associates Inc., Red Hook, NY, USA,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>G.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Walambe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kotecha</surname>
          </string-name>
          ,
          <article-title>A review on explainability in multimodal deep neural nets</article-title>
          ,
          <source>IEEE Access</source>
          <volume>25</volume>
          (
          <year>2021</year>
          )
          <fpage>59800</fpage>
          -
          <lpage>59821</lpage>
          . doi:10.1109/ACCESS.2021.3070212.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hinthorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Russakovsky</surname>
          </string-name>
          ,
          <article-title>Point and ask: Incorporating pointing into visual question answering</article-title>
          , CoRR abs/2011.13681 (
          <year>2020</year>
          ). URL: https://arxiv.org/abs/2011.13681. arXiv:2011.13681.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohapatra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parikh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Batra</surname>
          </string-name>
          ,
          <article-title>Interpreting visual question answering models</article-title>
          ,
          <source>CoRR abs/1608.08974</source>
          (
          <year>2016</year>
          ). URL: http://arxiv.org/abs/1608.08974. arXiv:1608.08974.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. T.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. M.</given-names>
            <surname>Ro</surname>
          </string-name>
          ,
          <article-title>Generation of multimodal justification using visual word constraint model for explainable computer-aided diagnosis</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Suzuki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Reyes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Syeda-Mahmood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Konukoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Glocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wiest</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Greenspan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Madabhushi</surname>
          </string-name>
          (Eds.),
          <source>Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>A.-H.</given-names>
            <surname>Karimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schölkopf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Valera</surname>
          </string-name>
          ,
          <article-title>Algorithmic recourse: from counterfactual explanations to interventions</article-title>
          ,
          <source>in: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</source>
          , FAccT '21, Association for Computing Machinery, New York, NY, USA,
          <year>2021</year>
          , pp.
          <fpage>353</fpage>
          -
          <lpage>362</lpage>
          . URL: https://doi.org/10.1145/3442188.3445899. doi:10.1145/3442188.3445899.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Hendricks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Darrell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Akata</surname>
          </string-name>
          ,
          <article-title>Generating counterfactual explanations with natural language</article-title>
          , CoRR abs/1806.09809 (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1806.09809. arXiv:1806.09809.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>K.</given-names>
            <surname>Alipour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Schulze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. T.</given-names>
            <surname>Burachas</surname>
          </string-name>
          ,
          <article-title>The impact of explanations on AI competency prediction in VQA</article-title>
          ,
          <source>in: 2020 IEEE International Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>32</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gaur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Faldu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sheth</surname>
          </string-name>
          ,
          <article-title>Semantics of the black-box: Can knowledge graphs help make deep learning systems more interpretable and explainable?</article-title>
          ,
          <source>IEEE Internet Computing</source>
          <volume>25</volume>
          (
          <year>2021</year>
          )
          <fpage>51</fpage>
          -
          <lpage>59</lpage>
          . doi:10.1109/MIC.2020.3031769.
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>D.</given-names>
            <surname>Blakely</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lanchantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <article-title>Time and space complexity of graph convolutional networks</article-title>
          ,
          <source>Accessed on: Dec. 31</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Brevdo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Chollet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gouws</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          <article-title>Tensor2tensor for neural machine translation</article-title>
          ,
          <source>in: Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>199</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. N.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>
          ,
          <year>2018</year>
          . URL: https://arxiv.org/abs/1810.04805.
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>S.</given-names>
            <surname>Longpre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Webson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. W.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          , et al.,
          <article-title>The Flan Collection: Designing data and methods for effective instruction tuning</article-title>
          ,
          <source>arXiv preprint arXiv:2301.13688</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>E.</given-names>
            <surname>Alothali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zaki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Alashwal</surname>
          </string-name>
          ,
          <article-title>Detecting social bots on Twitter: a literature review</article-title>
          ,
          <source>in: 2018 International conference on innovations in information technology (IIT)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>175</fpage>
          -
          <lpage>180</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mazza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cresci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Avvenuti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Quattrociocchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tesconi</surname>
          </string-name>
          ,
          <article-title>RTbust: Exploiting temporal patterns for botnet detection on Twitter</article-title>
          ,
          <source>in: Proceedings of the 10th ACM Conference on Web Science</source>
          , WebSci '19, Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          , pp.
          <fpage>183</fpage>
          -
          <lpage>192</lpage>
          . URL: https://doi.org/10.1145/3292522.3326015. doi:10.1145/3292522.3326015.
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. H.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Porter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Detecting political biases of named entities and hashtags on Twitter</article-title>
          ,
          <source>EPJ Data Science</source>
          <volume>12</volume>
          (
          <year>2023</year>
          )
          <fpage>20</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>K.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mondon</surname>
          </string-name>
          ,
          <article-title>Populism, the media, and the mainstreaming of the far right: The Guardian coverage of populism as a case study</article-title>
          ,
          <source>Politics</source>
          <volume>41</volume>
          (
          <year>2021</year>
          )
          <fpage>279</fpage>
          -
          <lpage>295</lpage>
          . URL: https://doi.org/10.1177/0263395720955036. doi:10.1177/0263395720955036.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>