<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multilingual Analysis of YouTube's Recommendation System: Examining Topic and Emotion Drift in the 'Cheng Ho' Narrative</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ugochukwu Onyepunuka</string-name>
          <email>uponyepunuka@ualr.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mustafa Alassad</string-name>
          <email>mmalassad@ualr.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lotenna Nwana</string-name>
          <email>ltnwana@ualr.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nitin Agarwal</string-name>
          <email>nxagarwal@ualr.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>COSMOS Research Center, University of Arkansas - Little Rock</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>YouTube is a major source of information for many users, and its recommendation algorithm is pivotal in video discovery and viewership on the platform. The algorithm is responsible for 70% of the content users engage with, and for the type of information users are exposed to. The importance of scrutinizing recommendation systems, to understand how potential algorithmic bias may affect users in spreading disinformation, cannot be overemphasized. Previous studies have shown that the recommendation algorithm has an inherent tendency of bias toward a small fraction of videos, and that it pushes users into mild ideological echo chambers. This study aims to determine the extent to which YouTube's recommendation algorithm spreads disinformation, using the Cheng Ho narrative. Cheng Ho was a Muslim Chinese naval admiral in the 15th century, nicknamed the “Chinese Columbus”. He was a symbol of China's Islamic diplomacy and peaceful ascendancy to power. To achieve the study's aim, a list of 50 videos on Cheng Ho was collected by passing relevant keywords to YouTube's search API. These 50 videos served as the seed for the recommendations, with 58,825 unique videos collected through 5 depths of recommendations. We computed the topic drift across the recommendation depths and discovered that the recommendations led us further away from the original topic. Furthermore, observing the eigenvector centrality values of videos within the recommendation network at different depths, we saw influential videos evolve as their relevance to Cheng Ho diminished. The results showed how YouTube's recommendation system discards the topics of the seed videos by subtly introducing a new but still pro-China topic into the network through influential videos. This new topic concerns economic growth and religious freedom in China and targets Indonesia's younger demographic by focusing on current events and pop culture.
This study sets the stage for further research in analyzing bias in recommendation algorithms, their exploitation by information actors, their impact on mis/disinformation propagation, and their effect on user consumption.</p>
      </abstract>
      <kwd-group>
        <kwd>Cheng Ho</kwd>
        <kwd>China</kwd>
        <kwd>disinformation</kwd>
        <kwd>recommendation bias</kwd>
        <kwd>topic drift</kwd>
        <kwd>YouTube</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        YouTube is the largest video-sharing platform today, with over 6 billion hours of videos watched
by its visitors every month [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It has become a major source of information for most people, as
over 2.5 billion active users use it monthly [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The content on YouTube covers a vast range of
categories, including education, sports, politics, and religion. YouTube recommendation was
designed to suggest content based on a user’s current and previous viewings. The
recommendation algorithm on the platform is intended to maximize user retention by suggesting content
appealing to a user’s interest. With YouTube being a critical source of information for many
users and its recommendation algorithm driving the type of information users are exposed to, it
is important to scrutinize the recommendation system to understand how potential algorithmic
bias may affect users. Consequences of algorithmic bias, such as misinformation and
disinformation, can intensify discrimination against marginalized communities. For this
study, we analyze the algorithm for the presence of bias and examine whether any detected bias aids in the
spread of the Cheng Ho propaganda narrative.
      </p>
      <p>
        Cheng Ho, also known as Zheng He, was a Chinese naval admiral who commanded naval
voyages in the early 15th century through Southeast Asia, India, and the Middle East [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The
Cheng Ho mythology seeks to advance China’s Islamic diplomacy by portraying him as a
benevolent giver who spread Islam across Southeast Asia [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The manipulation of the
Cheng Ho narrative is intended to increase regional support for China’s “Maritime Silk Road”
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and as a response to their oppression of Uyghur Muslims [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Accompanied by the public
scrutiny of China’s oppression of the Uyghur Muslims, the Chinese Communist Party (CCP)
has invested in reviving and manipulating the Cheng Ho myth [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The Cheng Ho myth has
been woven into a symbol of China’s peaceful ascendancy to power while portraying China’s
economic, military, and naval prowess [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Peggy-Jean et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] highlight the importance of
this disinformation campaign to China’s geopolitical ambitions in the South China Sea. The
tactics deployed by these disinformation actors rely on constructing collective memory by
repackaging history in a fictional and false manner [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Furthermore, Geoff Wade [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
presents a revisionist view of Cheng Ho’s voyages and argues that the voyages of friendship
were an aggressive attempt to establish China’s dominance in places like Vietnam.
      </p>
      <p>The motivation for this study comes from the need to highlight the extent of algorithmic
bias in spreading disinformation. Through this research, we found a set of highly-influential
YouTube videos that acted as attractors in the recommendation network. In a further analysis,
we examined the characteristics of these attractors and determined their relevance to the Cheng
Ho narrative. A multilingual analysis of the video recommendations was also performed to
compare patterns that may exist in different cultural contexts.</p>
      <p>The results showed that content related to our seed videos was filtered out across
recommendations. Simultaneously, new content unrelated to the seed videos was introduced into the
network through attractors. This new content, identified as ’pro-China’ topics in the network,
focused on economic growth and religious freedom in China and targeted Indonesia’s
younger demographic by incorporating current events and pop culture.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>This section presents related works on topic drift, recommendation bias, disinformation, and
radicalization. In recent years, the algorithmic bias of recommendation engines has been studied
to understand the extent of its contribution to spreading misinformation and to leading users into
echo chambers, polarization, and radicalization.</p>
      <p>
        Disinformation is intentional false information designed to deceive or mislead people [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
Undoubtedly, political disinformation narratives are intended to influence people’s perceptions
of reality to advance authoritarian and populist agendas [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Additionally, it aims to inflame
polarization and discrimination against marginalized communities, subvert human rights
defenders and human rights processes, and discredit facts [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Despite its negative impact on
society, little research has been conducted to understand its heterogeneous effect on minority
groups. Inspired by this, Neo et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] analyzed the qualitative effects of disinformation
on Indonesia’s racial, ethnic, and sexual minority communities. The study was conducted
on data from interviews with Indonesian citizens belonging to minority groups. It revealed
how dominant social groups had utilized disinformation as a tool to gather various types of
political and religious capital and socially control and punish minority groups. Furthermore,
they concluded that given the spread of fake news through social media platforms, increased
regulation of these platforms has the capacity to mitigate the effects of disinformation and
foster a healthier society [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Brown et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] analyzed the extent to which YouTube pushes users into rabbit holes or echo
chambers of ideologically biased content. To achieve this, they developed a method to estimate
the ideology of videos and avoided user personalization from the recommendation algorithm.
They discovered that the recommendation algorithm pushes users into mild echo chambers, but
insufficient evidence indicates that it leads them down the rabbit hole of ideologically extreme
content [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In conclusion, they observed that the longer users follow the recommendations,
the more they are pulled into a narrow range of ideological content, regardless of the users’ ideology.
Kirdemir et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] described similar findings, showing YouTube’s recommendation bias in favor
of a streamlined set of content under generalizable conditions.
      </p>
      <p>
        Heuer et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] aimed to audit the bias that emanates from YouTube’s
recommendation algorithm on political topics in Germany. They followed ten chains of recommendations
per video on the political topic to examine potential recommendation bias from YouTube. Their
findings suggest that YouTube enacts strong popularity bias in its recommendations, but the
videos are topically dissimilar or unrelated to the original narrative [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The researchers also
discovered emotion drift from sadness to happiness in the recommendations and examined the
relationship between content popularity and emotions.
      </p>
      <p>
        Other related works have employed the use of human annotators in content analysis to
quantify topic and emotion drift. This study presents a systematic computational approach
including Topic Modeling [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], Network Analysis [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], and Hellinger Distance Score [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] to
examine the extent of YouTube’s recommendation bias in spreading a disinformation narrative.
Unlike other works, this study provides a multilingual analysis of the recommendation algorithm
to compare patterns in different cultural contexts. Through this study, we examined the evolution
of a YouTube recommendation network on a disinformation narrative (Cheng Ho) as it progresses
through the recommendation depths.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>This section describes the data collection process and the methodology utilized in this study.</p>
      <sec id="sec-3-1">
        <title>3.1. Data Collection</title>
        <p>
          The YouTube API crawler described by Kready et al. [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] was used to collect over 1000 videos based
on the following keyphrases:
        </p>
        <p>“‘Cheng Ho’/‘Zheng He’+laksamana+damai”, “‘Sam Po Kong’+Islam+Indonesia”, “‘1421 Saat
China Menemukan Dunia’+‘Gavin Menzies’”, and “1421 Saat China Menemukan Dunia”.</p>
        <p>The data collected was written to a MySQL database, where the video titles were queried
with a full-text search of the keyphrases. Furthermore, we extracted the top 50 most viewed
videos to limit the results.</p>
        <p>Similarly, we adopted a custom crawler to collect five depersonalized video recommendations
on each seed video, producing the 1st depth of recommendations. Furthermore, this process was
repeated four more times, with videos from the previous depths used as inputs to generate the
next depths. The depersonalization was done to evaluate the raw recommendation algorithm
without the influence of the user history. Table 1. highlights the number of videos collected at
each depth.</p>
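<p>The depth-wise collection described above can be sketched as a breadth-first expansion. This is an illustrative sketch only: the fetch_recommendations helper below is a hypothetical stand-in for the depersonalized crawler, not the actual API client used in the study.</p>

```python
def fetch_recommendations(video_id, n):
    # Hypothetical stand-in for the depersonalized crawler:
    # deterministically derives n "recommended" ids (illustration only).
    return [f"{video_id}-r{i}" for i in range(n)]

def crawl_depths(seed_ids, max_depth=5, per_video=5):
    """Breadth-first expansion: each depth's videos seed the next depth."""
    depths = {0: list(seed_ids)}
    seen = set(seed_ids)
    for d in range(1, max_depth + 1):
        next_level = []
        for vid in depths[d - 1]:
            for rec in fetch_recommendations(vid, per_video):
                if rec not in seen:  # keep only videos not seen at earlier depths
                    seen.add(rec)
                    next_level.append(rec)
        depths[d] = next_level
    return depths
```

<p>In the study, this procedure was run with the 50 seed videos, five recommendations per video, and five depths.</p>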
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Topic Drift</title>
        <p>
          By analyzing the topic drift, we aimed to determine if recommendations deviate from the
original topic and investigate the content similarity that occurs in YouTube’s recommendation
system as it progresses through five depths of recommendations. Topic drift occurs when the
topics of the recommended videos deviate from the original topics found in the seed videos. To
measure topic drift, the titles of unique videos across all depths were concatenated to create a
master corpus. The master corpus was subjected to an LDA topic model [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] to determine the
topics present, which revealed 10 topics in the master corpus. Next, a corpus was created for
each depth by combining the titles of all the videos in a depth. Furthermore, the corpus for each
depth was passed through the LDA topic model to generate their topic probability distribution
(likelihood for a corpus to belong to a topic). Finally, to measure content similarity between
two depths we applied the Hellinger Distance metric [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] to their topic probability distribution.
The Hellinger distance formula is a mathematical formula used to measure the similarity or
dissimilarity between two probability distributions. Given two probability distributions P and
Q, the Hellinger distance H(P, Q) is defined as:
H(P, Q) = (1/√2) · √( Σᵢ (√pᵢ − √qᵢ)² )   (1)
        </p>
        <p>The Hellinger distance ranges from 0 (when P and Q are identical) to 1 (when P and Q have
no common outcomes). The topic drifts were measured in two ways:
• Seed-to-depth distance: quantifies the topic similarity between the seed videos (depth 0)
and each depth (e.g., depth 0 &amp; depth 1; depth 0 &amp; depth 2; depth 0 &amp; depth 3) by measuring
the distance between their topic probability distributions.
• Inter-depth distance: quantifies the topic similarity between adjacent depths (e.g., depth
0 &amp; depth 1; depth 1 &amp; depth 2; depth 2 &amp; depth 3) by measuring the distance between
their topic probability distributions.</p>
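<p>Both drift measures can be computed directly from the per-depth topic probability distributions. A minimal sketch, assuming those distributions have already been produced by the LDA model and are represented here as plain lists of topic probabilities:</p>

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(s) / math.sqrt(2.0)

def drift_series(depth_dists):
    """Seed-to-depth and inter-depth Hellinger distances.

    depth_dists[0] is the topic distribution of the seed videos (depth 0).
    """
    seed = depth_dists[0]
    seed_to_depth = [hellinger(seed, d) for d in depth_dists[1:]]
    inter_depth = [hellinger(a, b) for a, b in zip(depth_dists, depth_dists[1:])]
    return seed_to_depth, inter_depth
```

<p>As in Equation (1), identical distributions give a distance of 0 and distributions with no common outcomes give 1.</p>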
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Network Analysis</title>
        <p>
          We performed a network analysis to understand the network structure within the
recommendation depths. The aim was to observe how the recommendation network evolves as we progress
through the recommendation depths. To analyze the recommendation network, we created a
network graph for each depth and analyzed it using Gephi [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The recommendation network
consists of videos as vertices and recommendations as directed edges. Next, we used the
modularity measure [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] on Gephi to identify communities within the network, which enabled us
to isolate the videos in a community and summarize the videos’ characteristics to understand
their uniqueness. We utilized the eigenvector centrality value (eigenvalues) to measure the
transitive influence of nodes in the network [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. The value (0 to 1) of a node is dependent
on how well-connected a node is to other well-connected nodes. Applying the eigenvector
centrality measure to the network helped identify the influential videos (high eigenvalues)
acting as attractors in the recommendation network. These videos with high eigenvalues are
favored by the algorithm with a propensity to be recommended. Furthermore, we determined
the similarity of the influential videos to the original narrative.
        </p>
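<p>The centrality computation can be illustrated with a toy power-iteration sketch; the study itself used Gephi's implementation, and the edge tuples below are hypothetical:</p>

```python
def eigenvector_centrality(edges, iters=100):
    """Power iteration over the recommendation graph: a video's score
    grows when it is recommended by other high-scoring videos
    (transitive influence)."""
    nodes = sorted({v for e in edges for v in e})
    # incoming[v] lists the videos that recommend v (directed edges)
    incoming = {v: [s for s, t in edges if t == v] for v in nodes}
    score = {v: 1.0 for v in nodes}
    for _ in range(iters):
        new = {v: sum(score[s] for s in incoming[v]) for v in nodes}
        norm = sum(x * x for x in new.values()) ** 0.5 or 1.0
        score = {v: x / norm for v, x in new.items()}
    return score
```

<p>Videos with the highest scores are the attractors examined later in the network analysis.</p>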
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Emotion Analysis</title>
        <p>
          Next, we performed an emotional analysis to examine the emotions embedded in the
recommendations and measure emotion drift (the divergence of emotions from the original narrative). This
allowed us to further assess how the YouTube recommendation algorithm considers emotions
[
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Emotion analysis involves detecting the true emotions behind a text. For effective text
analysis, the emotion analysis technique employs natural language processing, information
extraction techniques, and a pre-trained language representation model [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. Manuel Romero’s
T5-based-finetuned-emotion model [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] was used to identify the emotions of video titles, using
transfer learning which preserves previous information, thereby improving the speed and
accuracy of training. This produced a probability score for six emotions: anger, fear, joy, love,
surprise, and sadness. Furthermore, we computed the distribution of emotions for each depth
and visualized it on a line graph, allowing us to see the emotion trends as we traverse through
the recommendation depths. This process was conducted on the subset of Indonesian and
English video titles for each depth, converting the Indonesian video title to English before
passing it as input into the emotion model.
        </p>
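<p>Once each title has a predicted emotion label, the per-depth distribution is a normalized count. A sketch, assuming the labels have already been produced by the emotion model:</p>

```python
from collections import Counter

EMOTIONS = ["anger", "fear", "joy", "love", "surprise", "sadness"]

def emotion_distribution(labels):
    """Share of each of the six emotions among one depth's video titles."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1  # avoid division by zero on an empty depth
    return {e: counts[e] / total for e in EMOTIONS}
```

<p>Computing this per depth and plotting the shares as a line graph yields the emotion-trend curves discussed in the results.</p>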
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>This section discusses our findings at each stage of our methodology.</p>
      <sec id="sec-4-1">
        <title>4.1. Topic Drift Analysis of Recommended Videos</title>
        <p>The topic drift analysis aims to determine if YouTube’s recommendations deviate from the
original topic as they progress through five depths of recommendations. A high Hellinger
distance score signifies low similarity between two probability distributions and vice versa.
Therefore, topic drift occurs when there is a sequential increase in the Hellinger distance
between two topic probability distributions. In a line chart, an upward trend indicates
topic drift (increasing Hellinger distance), i.e., decreasing similarity.</p>
        <p>After collecting the recommendations for five depths we calculated the language distribution
of videos in each depth. We saw that Indonesian and English had the largest
shares, 55.5% and 34.6% respectively, which influenced our decision to perform a
comparative analysis of the two languages. The topic drift methodology described above was applied
to a subset of these languages across depths. Since our seed videos were all Indonesian, we
translated them into English and used them to calculate the topic drift of the English videos.</p>
        <p>In Figure 1 (a &amp; b), we measured the topic similarity of the recommended videos between
adjacent depths (the inter-depth distance). Looking at the results of the inter-depth distance of
English and Indonesian videos in Figure 1(a &amp; b), we observed a declining Hellinger distance
score. From the graphs, the maximum distance occurred between the seed (0) and depth 1,
with a constant decline to the end. This shows that as we progress through the
recommendation depths, the content similarity between adjacent depths increases; in other words, the
recommendations become more similar to each other.</p>
        <p>Furthermore, the seed-to-depth Hellinger distance score for English and Indonesian videos
in Figure 1 (c &amp; d), was used to determine the relevance of the recommendations in each depth
to the original narrative. From the graphs, we observed that the distance between the seed and
the recommendations increases as we progress through the depths. The observed continuous
topic drift explains that the recommended videos were becoming more dissimilar from the seed
as we moved through the depths.</p>
        <p>Combining the results of the two topic drift views suggests that, regardless of each depth’s
increasing dissimilarity to the seed (seed-depth distance), YouTube’s recommendation engine
strives to keep the content between each adjacent depth similar (inter-depth distance). This
explains how users are gradually exposed to videos irrelevant to the seed. We also observed a higher
magnitude of topic drift for English videos than for Indonesian videos.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Analysis of the Recommendation Network</title>
        <p>Next, we analyzed each depth’s network to identify the influential nodes with a higher
propensity to be recommended by the algorithm. The videos in each depth were ranked by their
eigenvector centrality value, which determines a node’s transitive influence in the network. We
selected and examined the top 3 most influential videos based on the eigenvalues. By examining
the network’s influential nodes, we confirm the kinds of videos that affect recommendations and
how a video’s influence evolves as we go through the depths. We observed how the influential
nodes in the network evolve from relevant videos to irrelevant videos about the Cheng Ho topic.
The topics of the influential nodes drift from the original narrative and a new topic (“Cha Guan”)
is subtly introduced at depths 2 and 3, with a complete shift at depth 4. “Cha Guan” is a segment
on the “Asumi” channel, a media-tech institution aimed at Indonesia’s younger demographic,
with a focus on current events and pop culture. Notably, the videos on this channel discussed
the economy and religious freedom in China. Still observing the evolution of influential videos
in the network, we noticed a shift in language at depth 5, from Indonesian to English. In addition
to the language change amongst influential videos at depth 5, the distribution of eigenvector
scores was significantly lower.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Emotion Drift Analysis</title>
        <p>
          Analyzing the emotions attached to videos at different recommendation depths, and whether
they remain similar to the original narrative, helps assess the impact of emotions on YouTube’s
recommendations. Performing a comparative analysis of the emotion drift on Indonesian and
English videos across all depths allows us to identify patterns that may exist in the different
language contexts. However, in the emotion drift charts of Indonesian and English videos shown
in Figure 5, we see that a similar trend exists between them. Joy is the most prominent emotion
exhibited in the video titles of the different languages. This could be due to the likelihood for
users to engage more with positive emotions than negative ones [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] or that it aligns with the
disinformation propaganda of the Cheng Ho narrative driven by China. Further analysis of the
impact of emotions on video recommendations is conducted in future work by comparing the
emotion drifts on competing narratives.
        </p>
        <p>The results from the topic drift analysis show high content similarity in the recommendations
between depths, but low content similarity with the seeds. Accordingly, the increased similarity
between recommendation depths and their divergence from the seed videos is indicative of
bias toward a coordinated pool of videos with no relevance to the original topic. Furthermore,
analyzing the network with the eigenvector centrality measure revealed influential videos at
each depth. Additionally, the emotion analysis performed on the data showed joy as the most
prominent emotion, which aligns with the emotions attached to the Cheng Ho narrative.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>From our analysis, we find no evidence affirming the contributions of YouTube’s
recommendations toward spreading disinformation. The results of our topic drift analysis show that as we
progressively step through YouTube’s recommendations, we get further away from our original
narrative. However, we observed an increased similarity between the adjacent recommendation
depths. The increased similarity between recommendations and their divergence from the seed
videos is suggestive of a bias towards a streamlined pool of videos with no relevance to the
original topic. This is evident in the results of our network analysis, where we see how the
content of the influential videos switches from our original topic to videos about “Cha Guan”.
From our comparison of Indonesian and English topic drifts, we observed similar patterns
between them, but the English videos had no relevance to our original topic. The emotional
analysis conducted on the recommended videos showed joy as the most prominent emotion;
this could either be due to the likelihood for users to engage more with positive emotions
than negative ones, or tie into the theme of the China-driven Cheng Ho narrative. From our
findings, we conclude that YouTube’s recommendation algorithm does not aid the spread of
disinformation, but that as we step through recommendations the relevance of their contents to
the original narrative is reduced.</p>
      <p>Future directions for this research include evaluating topic drifts on different levels of video
text including video description, transcript, and a concatenation of video title and description.
Other directions may involve building a predictive model from the video characteristics
(engagement stats, eigenvector score, topic distribution, etc.) at each depth to determine the likelihood
of a video being recommended by another video.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This research is funded in part by the U.S. National Science Foundation (OIA-1946391,
OIA-1920920, IIS-1636933, ACI-1429160, and IIS-1110868), U.S. Office of the Under Secretary of
Defense for Research and Engineering (FA9550-22-1-0332), U.S. Office of Naval Research
(N00014-10-1-0091, N00014-14-1-0489, N00014-15-P-1187, N00014-16-1-2016, N00014-16-1-2412,
N00014-17-1-2675, N00014-17-1-2605, N68335-19-C-0359, N00014-19-1-2336, N68335-20-C-0540,
N00014-21-1-2121, N00014-21-1-2765, N00014-22-1-2318), U.S. Air Force Research Laboratory, U.S. Army
Research Office (W911NF-20-1-0262, W911NF-16-1-0189, W911NF-23-1-0011), U.S. Defense
Advanced Research Projects Agency (W31P4Q-17-C-0059), Arkansas Research Alliance, the
Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock, and the
Australian Department of Defense Strategic Policy Grants Program (SPGP) (award number:
2020-106-094). Any opinions, findings, and conclusions or recommendations expressed in
this material are those of the authors and do not necessarily reflect the views of the funding
organizations. The researchers gratefully acknowledge the support.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] YouTube: What is YouTube?,
          <year>2022</year>
          . URL: https://edu.gcfglobal.org/en/youtube/what-is-youtube/1/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] Biggest social media platforms 2022,
          <year>2022</year>
          . URL: https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.-J.</given-names>
            <surname>Allin</surname>
          </string-name>
          , S. Corman,
          <article-title>“China's Columbus” Was an Imperialist Too: Contesting the Myth of Zheng He</article-title>
          (
          <year>2022</year>
          ). URL: https://smallwarsjournal.com/jrnl/art/chinas-columbus-was-imperialist-too-contesting-myth-zheng-he.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <article-title>'Fake news of the century': The Muslim explorer China deploys while persecuting Muslims</article-title>
          ,
          <source>ABC News</source>
          (
          <year>2019</year>
          ). URL: https://www.abc.net.au/news/2019-09-22/zheng-he-chinese-islam-explorer-belt-and-road/11471758.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <article-title>Zheng He and the maritime silk road</article-title>
          ,
          <year>2015</year>
          . URL: https://u.osu.edu/mclc/2015/10/02/zheng-he-and-the-maritime-silk-road/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Wade</surname>
          </string-name>
          , The Zheng He Voyages: A Reassessment,
          <source>Journal of the Malaysian Branch of the Royal Asiatic Society</source>
          <volume>78</volume>
          (
          <year>2005</year>
          )
          <fpage>37</fpage>
          -
          <lpage>58</lpage>
          . doi:10.2307/41493537.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <article-title>China's Repression of Uyghurs in Xinjiang</article-title>,
          <year>2022</year>. URL: https://www.cfr.org/backgrounder/china-xinjiang-uyghurs-muslims-repression-genocide-human-rights.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <article-title>Misinformation and disinformation</article-title>,
          <year>2022</year>. URL: https://www.apa.org/topics/journalism-facts/misinformation-disinformation.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          APC,
          <article-title>Disinformation and freedom of expression: Submission in response to the call by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression</article-title>,
          <source>Association for Progressive Communications</source>,
          <year>2021</year>. URL: http://bit.ly/3FPHghD.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Ric</surname>
          </string-name>
          <string-name>
            <surname>Neo</surname>
          </string-name>
          , Jason DC Yin,
          <article-title>Of social discipline and control: The impact of fake news and disinformation on minorities in Indonesia | Association for Progressive Communications</article-title>
          ,
          <year>2021</year>
          . URL: https://www.apc.org/en/pubs/ social
          <article-title>-discipline-and-control-impact-fake-news-and-disinformation-minorities-indonesia.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name><given-names>M. A.</given-names> <surname>Brown</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Bisbee</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Lai</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Bonneau</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Nagler</surname></string-name>,
          <string-name><given-names>J. A.</given-names> <surname>Tucker</surname></string-name>,
          <article-title>Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users</article-title>,
          <year>2022</year>. URL: https://papers.ssrn.com/abstract=4114905. doi:10.2139/ssrn.4114905.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name><given-names>B.</given-names> <surname>Kirdemir</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Kready</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Mead</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Hussain</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Agarwal</surname></string-name>,
          <article-title>Examining Video Recommendation Bias on YouTube</article-title>,
          <year>2021</year>, pp.
          <fpage>106</fpage>-<lpage>116</lpage>. doi:10.1007/978-3-030-78818-6_10.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name><given-names>H.</given-names> <surname>Heuer</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Hoch</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Breiter</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Theocharis</surname></string-name>,
          <article-title>Auditing the Biases Enacted by YouTube for Political Topics in Germany</article-title>,
          in: <source>Mensch und Computer 2021</source>,
          <year>2021</year>, pp.
          <fpage>456</fpage>-<lpage>468</lpage>. URL: http://arxiv.org/abs/2107.09922. doi:10.1145/3473856.3473864, arXiv:2107.09922 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <article-title>Topic Modeling and Latent Dirichlet Allocation (LDA)</article-title>,
          <source>DataScience+</source>,
          <year>2022</year>. URL: https://datascienceplus.com/topic-modeling-and-latent-dirichlet-allocation-lda/.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <article-title>Network Analysis - an overview</article-title>,
          <source>ScienceDirect Topics</source>,
          <year>2018</year>. URL: https://www.sciencedirect.com/topics/social-sciences/network-analysis.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name><given-names>A.</given-names> <surname>Shemyakin</surname></string-name>,
          <article-title>Hellinger Distance and Non-informative Priors</article-title>,
          <source>Bayesian Analysis</source>
          <volume>9</volume>
          (<year>2014</year>)
          <fpage>923</fpage>-<lpage>938</lpage>. URL: https://projecteuclid.org/journals/bayesian-analysis/volume-9/issue-4/Hellinger-Distance-and-Non-informative-Priors/10.1214/14-BA881.full. doi:10.1214/14-BA881, publisher: International Society for Bayesian Analysis.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name><given-names>J.</given-names> <surname>Kready</surname></string-name>,
          <string-name><given-names>S. A.</given-names> <surname>Shimray</surname></string-name>,
          <string-name><given-names>M. N.</given-names> <surname>Hussain</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Agarwal</surname></string-name>,
          <article-title>YouTube Data Collection Using Parallel Processing</article-title>,
          in: <source>2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)</source>,
          <year>2020</year>, pp.
          <fpage>1119</fpage>-<lpage>1122</lpage>. doi:10.1109/IPDPSW50202.2020.00185.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <article-title>Learn how to use Gephi</article-title>,
          <year>2008</year>. URL: https://gephi.org/users/.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <article-title>Data Story: Gephi - Clustering layout by modularity</article-title>,
          <year>2014</year>. URL: https://parklize.blogspot.com/2014/12/gephi-clustering-layout-by-modularity.html.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <article-title>Eigenvector Centrality - Neo4j Graph Data Science</article-title>,
          <year>2023</year>. URL: https://neo4j.com/docs/graph-data-science/2.2/algorithms/eigenvector-centrality/.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name><given-names>N.</given-names> <surname>Shelke</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Chaudhury</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Chakrabarti</surname></string-name>,
          <string-name><given-names>S. L.</given-names> <surname>Bangare</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Yogapriya</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Pandey</surname></string-name>,
          <article-title>An efficient way of text-based emotion analysis from social media using LRA-DNN</article-title>,
          <source>Neuroscience Informatics</source>
          <volume>2</volume>
          (<year>2022</year>)
          <fpage>100048</fpage>. URL: https://www.sciencedirect.com/science/article/pii/S2772528622000103. doi:10.1016/j.neuri.2022.100048.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <source>mrm8488/t5-base-finetuned-emotion · Hugging Face</source>,
          <year>2022</year>. URL: https://huggingface.co/mrm8488/t5-base-finetuned-emotion.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>