<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Multilingual Analysis of YouTube&apos;s Recommendation System: Examining Topic and Emotion Drift in the &apos;Cheng Ho&apos; Narrative</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ugochukwu</forename><surname>Onyepunuka</surname></persName>
							<email>uponyepunuka@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="department">COSMOS Research Center</orgName>
								<orgName type="institution">University of Arkansas</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mustafa</forename><surname>Alassad</surname></persName>
							<email>mmalassad@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="department">COSMOS Research Center</orgName>
								<orgName type="institution">University of Arkansas</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lotenna</forename><surname>Nwana</surname></persName>
							<email>ltnwana@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="department">COSMOS Research Center</orgName>
								<orgName type="institution">University of Arkansas</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nitin</forename><surname>Agarwal</surname></persName>
							<email>nxagarwal@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="department">COSMOS Research Center</orgName>
								<orgName type="institution">University of Arkansas</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Multilingual Analysis of YouTube&apos;s Recommendation System: Examining Topic and Emotion Drift in the &apos;Cheng Ho&apos; Narrative</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">9A9D46C3E5A9BEACE2C8DD4D3A7BE313</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-04-29T06:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Cheng Ho</term>
					<term>China</term>
					<term>disinformation</term>
					<term>recommendation bias</term>
					<term>topic drift</term>
					<term>YouTube</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>YouTube is a major source of information for many users, and its recommendation algorithm is pivotal to video discovery and viewership on the platform. The algorithm drives roughly 70% of the content users engage with and thus shapes the type of information users are exposed to. The importance of scrutinizing recommendation systems, to understand how potential algorithmic bias may affect users by spreading disinformation, cannot be overemphasized. Previous studies have shown that the recommendation algorithm has an inherent bias toward a small fraction of videos and pushes users into mild ideological echo chambers. This study aims to determine the extent to which YouTube's recommendation algorithm spreads disinformation, using the Cheng Ho narrative. Cheng Ho was a Muslim Chinese naval admiral in the 15th century, nicknamed the "Chinese Columbus", and a symbol of China's Islamic diplomacy and peaceful ascendancy to power. To achieve the study's aim, a list of 50 videos on Cheng Ho was collected by passing relevant keywords to YouTube's search API. These 50 videos served as the seeds for the recommendations, with 58,825 unique videos collected through five depths of recommendations. We computed the topic drift across the recommendation depths and discovered that the recommendations led us further away from the original topic. Furthermore, by observing the eigenvector centrality values of videos within the recommendation networks of different depths, we saw influential videos evolve as their relevance to Cheng Ho diminished. The results showed how YouTube's recommendation system discards the topics of the seed videos by subtly introducing a new but still pro-China topic into the network through influential videos. This new topic concerns economic growth and religious freedom in China and targets Indonesia's younger demographic by focusing on current events and pop culture.
This study sets the stage for further research into bias in recommendation algorithms, its exploitation by information actors, its impact on mis/disinformation propagation, and its effect on user consumption.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>YouTube is the largest video-sharing platform today, with over 6 billion hours of video watched by its visitors every month <ref type="bibr" target="#b0">[1]</ref>. It has become a major source of information for most people, with over 2.5 billion active users monthly <ref type="bibr" target="#b1">[2]</ref>. The content on YouTube covers a vast range of categories, including education, sports, politics, and religion. YouTube's recommendation system was designed to suggest content based on a user's current and previous viewing, and its algorithm is intended to maximize user retention by suggesting content appealing to a user's interests. With YouTube being a critical source of information for many users and its recommendation algorithm driving the type of information users are exposed to, it is important to scrutinize the recommendation system to understand how potential algorithmic bias may affect users. Consequences of algorithmic bias, such as the spread of misinformation and disinformation, can fuel discrimination against marginalized communities. In this study, we analyze the algorithm for the presence of bias and examine whether any detected bias aids the spread of the Cheng Ho propaganda narrative.</p><p>Cheng Ho, also known as Zheng He, was a Chinese naval admiral who commanded naval voyages in the early 15th century through Southeast Asia, India, and the Middle East <ref type="bibr" target="#b2">[3]</ref>. The Cheng Ho mythology seeks to advance China's Islamic diplomacy by portraying him as a benevolent giver who spread Islam across Southeast Asia <ref type="bibr" target="#b2">[3]</ref> [4] <ref type="bibr" target="#b4">[5]</ref>.
The manipulation of the Cheng Ho narrative is intended to increase regional support for China's "Maritime Silk Road" <ref type="bibr" target="#b4">[5]</ref> <ref type="bibr" target="#b5">[6]</ref> and to counter public scrutiny of China's oppression of the Uyghur Muslims <ref type="bibr" target="#b6">[7]</ref>. Against this backdrop, the Chinese Communist Party (CCP) has invested in reviving and manipulating the Cheng Ho myth <ref type="bibr">[3] [4]</ref>. The myth has been woven into a symbol of China's peaceful ascendancy to power while portraying China's economic, military, and naval prowess <ref type="bibr">[5] [6]</ref>. Allin and Corman <ref type="bibr" target="#b2">[3]</ref> highlight the importance of this disinformation campaign to China's geopolitical ambitions in the South China Sea. The tactics deployed by these disinformation actors rely on constructing collective memory by repackaging history in a fictional and false manner <ref type="bibr" target="#b2">[3]</ref>. Furthermore, Wade <ref type="bibr" target="#b5">[6]</ref> presents a revisionist view of Cheng Ho's voyages, arguing that the so-called voyages of friendship were an aggressive attempt to establish China's dominance in places like Vietnam.</p><p>The motivation for this study comes from the need to highlight the extent of algorithmic bias in spreading disinformation. Through this research, we found a set of highly influential YouTube videos that acted as attractors in the recommendation network. In a further analysis, we examined the characteristics of these attractors and determined their relevance to the Cheng Ho narrative. A multilingual analysis of the video recommendations was also performed to compare patterns that may exist in different cultural contexts.</p><p>The results showed that content related to our seed videos was filtered out across recommendations.
Simultaneously, new content unrelated to the seed videos was introduced into the network through attractors. This new content, identified as 'pro-China' topics focused on economic growth and religious freedom in China, also targeted Indonesia's younger demographic by incorporating current events and pop culture.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature Review</head><p>This section presents related work on topic drift, recommendation bias, disinformation, and radicalization. In recent years, the algorithmic bias of recommendation engines has been studied to understand the extent of its contribution to spreading misinformation and to leading users into echo chambers, polarization, and radicalization.</p><p>Disinformation is intentionally false information designed to deceive or mislead people <ref type="bibr" target="#b7">[8]</ref>. Political disinformation narratives are intended to influence people's perceptions of reality to advance authoritarian and populist agendas <ref type="bibr" target="#b8">[9]</ref>. Additionally, they aim to inflame polarization and discrimination against marginalized communities, subvert human rights defenders and human rights processes, and discredit facts <ref type="bibr" target="#b8">[9]</ref>. Despite its negative impact on society, little research has been conducted to understand disinformation's heterogeneous effects on minority groups. Inspired by this, Neo et al. <ref type="bibr" target="#b9">[10]</ref> analyzed the qualitative effects of disinformation on Indonesia's racial, ethnic, and sexual minority communities. The study was conducted on data from interviews with Indonesian citizens belonging to minority groups. It revealed how dominant social groups had utilized disinformation as a tool to gather various types of political and religious capital and to socially control and punish minority groups. Furthermore, the authors concluded that, given the spread of fake news through social media platforms, increased regulation of these platforms has the capacity to mitigate the effects of disinformation and foster a healthier society <ref type="bibr" target="#b9">[10]</ref>.</p><p>Brown et al.
<ref type="bibr" target="#b10">[11]</ref> analyzed the extent to which YouTube pushes users into rabbit holes or echo chambers of ideologically biased content. To achieve this, they developed a method to estimate the ideology of videos and avoided user personalization of the recommendation algorithm. They discovered that the recommendation algorithm pushes users into mild echo chambers, but there is insufficient evidence that it leads them down a rabbit hole of ideologically extreme content <ref type="bibr" target="#b10">[11]</ref>. They also observed that the longer users follow the recommendations, the more they are pulled into a narrow range of ideological content, regardless of the users' own ideology. Kirdemir et al. <ref type="bibr" target="#b11">[12]</ref> described similar findings, showing YouTube's recommendation bias in favor of a streamlined set of content under generalizable conditions.</p><p>Heuer et al. <ref type="bibr" target="#b12">[13]</ref> aimed to audit the bias that emanates from YouTube's recommendation algorithm on political topics in Germany. They followed ten chains of recommendations per video to examine potential recommendation bias from YouTube. Their findings suggest that YouTube enacts a strong popularity bias in its recommendations, but the recommended videos are topically dissimilar or unrelated to the original narrative <ref type="bibr" target="#b12">[13]</ref>. The researchers also discovered an emotion drift from sadness to happiness in the recommendations and examined the relationship between content popularity and emotions.</p><p>Other related works have employed human annotators for content analysis to quantify topic and emotion drift.
This study presents a systematic computational approach including Topic Modeling <ref type="bibr" target="#b13">[14]</ref>, Network Analysis <ref type="bibr" target="#b14">[15]</ref>, and Hellinger Distance Score <ref type="bibr" target="#b15">[16]</ref> to examine the extent of YouTube's recommendation bias in spreading a disinformation narrative. Unlike other works, this study provides a multilingual analysis of the recommendation algorithm to compare patterns in different cultural contexts. Through this study, we examined the evolution of a YouTube recommendation network on a disinformation narrative (Cheng Ho) as it progresses through the recommendation depths.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>This section describes the data collection process and the methodology utilized in this study. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Data Collection</head><p>The YouTube API crawler described by Kready et al. <ref type="bibr" target="#b16">[17]</ref> was used to collect over 1,000 videos based on the following keyphrases: "'Cheng Ho'/'Zheng He'+laksamana+damai", "'Sam Po Kong'+Islam+Indonesia", "'1421 Saat China Menemukan Dunia'+'Gavin Menzies'", and "1421 Saat China Menemukan Dunia".</p><p>The collected data were written to a MySQL database, where the video titles were queried with a full-text search of the keyphrases. We then extracted the top 50 most-viewed videos to limit the results.</p><p>Next, we used a custom crawler to collect five depersonalized video recommendations for each seed video, producing the first depth of recommendations. This process was repeated four more times, with the videos from each depth used as inputs to generate the next. The depersonalization was done to evaluate the raw recommendation algorithm without the influence of user history. Table <ref type="table" target="#tab_0">1</ref> highlights the number of videos collected at each depth.</p></div>
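The crawl described above (seed videos, five depersonalized recommendations per video, repeated for five depths) amounts to a breadth-first expansion. Below is a minimal Python sketch of that loop; `fetch_recommendations` is a hypothetical stand-in for the custom crawler's API call, and the toy recommender is invented for illustration.

```python
from collections import deque  # stdlib; not strictly needed for this simple loop

def crawl_recommendations(seed_ids, fetch_recommendations, max_depth=5, per_video=5):
    """Breadth-first crawl: collect `per_video` recommendations for every
    video at a depth, then repeat with those videos as the next depth's input."""
    depths = {0: list(seed_ids)}
    for depth in range(1, max_depth + 1):
        collected = []
        for video_id in depths[depth - 1]:
            collected.extend(fetch_recommendations(video_id)[:per_video])
        depths[depth] = collected
    return depths

# Toy recommender: each video deterministically "recommends" five new IDs.
def fake_recommender(video_id):
    return [f"{video_id}-r{i}" for i in range(5)]

result = crawl_recommendations(["seed0", "seed1"], fake_recommender, max_depth=2)
print(len(result[1]), len(result[2]))  # 10 videos at depth 1, 50 at depth 2
```

The multiplicative growth visible here (each depth roughly five times the previous) matches the pattern in Table 1, where raw counts grow from 50 seeds to 145,923 videos by depth 5.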
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Topic Drift</head><p>By analyzing the topic drift, we aimed to determine whether recommendations deviate from the original topic and to investigate the content similarity within YouTube's recommendation system as it progresses through five depths of recommendations. Topic drift occurs when the topics of the recommended videos deviate from the original topics found in the seed videos. To measure topic drift, the titles of unique videos across all depths were concatenated to create a master corpus. The master corpus was subjected to an LDA topic model <ref type="bibr" target="#b13">[14]</ref>, which revealed 10 topics. Next, a corpus was created for each depth by combining the titles of all the videos in that depth. Each depth's corpus was then passed through the LDA topic model to generate its topic probability distribution (the likelihood of the corpus belonging to each topic). Finally, to measure content similarity between two depths, we applied the Hellinger distance metric <ref type="bibr" target="#b15">[16]</ref> to their topic probability distributions. The Hellinger distance measures the similarity or dissimilarity between two probability distributions. Given two probability distributions P and Q, the Hellinger distance H(P, Q) is defined as:</p><formula xml:id="formula_0">H(P, Q) = (1/√2) √( ∑_{i=1}^{k} (√p_i − √q_i)² )<label>(1)</label></formula><p>The Hellinger distance ranges from 0 (when P and Q are identical) to 1 (when P and Q have no common outcomes). The topic drift was measured in two ways:</p><p>• Seed-to-depth distance: quantifies the topic similarity between the seed videos (depth 0) and each depth (e.g., depth 0 &amp; depth 1; depth 0 &amp; depth 2; depth 0 &amp; depth 3) by measuring the distance between their topic probability distributions.
• Inter-depth distance: quantifies the topic similarity between adjacent depths (e.g., depth 0 &amp; depth 1; depth 1 &amp; depth 2; depth 2 &amp; depth 3) by measuring the distance between their topic probability distributions.</p></div>
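Equation (1) and the two drift views can be computed directly from per-depth topic probability distributions. A minimal NumPy sketch follows; the toy three-topic distributions are illustrative only, not the study's fitted LDA output.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions,
    per Eq. (1): H(P,Q) = (1/sqrt(2)) * sqrt(sum((sqrt(p_i) - sqrt(q_i))^2))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2)

# Illustrative per-depth topic distributions (rows: depths 0..3, k = 3 topics).
depth_topics = [
    [0.8, 0.1, 0.1],   # seed videos (depth 0)
    [0.5, 0.3, 0.2],
    [0.3, 0.4, 0.3],
    [0.1, 0.5, 0.4],
]

# Seed-to-depth: distance of every depth from the seed distribution.
seed_to_depth = [hellinger(depth_topics[0], d) for d in depth_topics[1:]]
# Inter-depth: distance between each pair of adjacent depths.
inter_depth = [hellinger(depth_topics[i], depth_topics[i + 1])
               for i in range(len(depth_topics) - 1)]
print(seed_to_depth)  # increases with depth: drifting away from the seed topics
```

A rising seed-to-depth series alongside a low inter-depth series is exactly the signature reported in Section 4.1: each depth stays close to its neighbor while the whole chain moves away from the seeds.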
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Network Analysis</head><p>We performed a network analysis to understand the network structure within the recommendation depths. The aim was to observe how the recommendation network evolves as we progress through the recommendation depths. To analyze the recommendation network, we created a network graph for each depth and analyzed it using Gephi <ref type="bibr" target="#b17">[18]</ref>. The recommendation network consists of videos as vertices and recommendations as directed edges. Next, we used the modularity measure <ref type="bibr" target="#b18">[19]</ref> in Gephi to identify communities within the network, which enabled us to isolate the videos in a community and summarize their characteristics to understand their uniqueness. We utilized eigenvector centrality to measure the transitive influence of nodes in the network <ref type="bibr" target="#b19">[20]</ref>. A node's score (0 to 1) depends on how well-connected the node is to other well-connected nodes. Applying the eigenvector centrality measure to the network helped identify the influential videos (those with high centrality scores) acting as attractors in the recommendation network. These videos are favored by the algorithm, with a high propensity to be recommended. Furthermore, we determined the similarity of the influential videos to the original narrative.</p></div>
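The study computed eigenvector centrality in Gephi; the same in-link notion can be sketched with NumPy by taking the dominant eigenvector of the transposed adjacency matrix. The video IDs below are hypothetical: a small cluster of mutually recommended videos plus two feeder seeds, in which "H3" emerges as the attractor.

```python
import numpy as np

def eigenvector_centrality(nodes, edges):
    """In-link eigenvector centrality: a video's score depends on how often it
    is recommended by other well-recommended videos (dominant eigenvector of
    the transposed adjacency matrix, normalized so the top score is 1)."""
    idx = {n: i for i, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for src, dst in edges:                 # directed edge: src recommends dst
        A[idx[src], idx[dst]] = 1.0
    vals, vecs = np.linalg.eig(A.T)        # transpose: influence flows along in-links
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return dict(zip(nodes, v / v.max()))

# Hypothetical toy network: H1-H3 recommend each other; seeds S1, S2 feed in.
nodes = ["H1", "H2", "H3", "S1", "S2"]
edges = [("H1", "H2"), ("H2", "H3"), ("H3", "H1"), ("H1", "H3"),
         ("S1", "H3"), ("S2", "H3")]
scores = eigenvector_centrality(nodes, edges)
print(max(scores, key=scores.get))  # "H3" acts as the attractor
```

Note that the seed nodes, which receive no recommendations, end up with (near-)zero centrality, mirroring how seed-relevant videos lose influence across depths in the study.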
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Emotion Analysis</head><p>Next, we performed an emotion analysis to examine the emotions embedded in the recommendations and to measure emotion drift (the divergence of emotions from the original narrative). This allowed us to further assess how the YouTube recommendation algorithm considers emotions <ref type="bibr" target="#b12">[13]</ref>. Emotion analysis involves detecting the emotions behind a text; it employs natural language processing, information extraction techniques, and a pre-trained language representation model <ref type="bibr" target="#b20">[21]</ref>. Manuel Romero's T5-base-finetuned-emotion model <ref type="bibr" target="#b21">[22]</ref> was used to identify the emotions of video titles; its use of transfer learning preserves previously learned information, improving the speed and accuracy of training. This produced a probability score for each of six emotions: anger, fear, joy, love, surprise, and sadness. Furthermore, we computed the distribution of emotions for each depth and visualized it on a line graph, allowing us to see the emotion trends as we traverse the recommendation depths. This process was conducted on the subsets of Indonesian and English video titles for each depth, with Indonesian video titles translated to English before being passed as input to the emotion model.</p></div>
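The per-depth emotion distributions described above can be sketched as a simple aggregation of per-title probability vectors. The classifier outputs below are invented for illustration; in the study they would come from the T5 emotion model applied to each (translated) title.

```python
import numpy as np

EMOTIONS = ["anger", "fear", "joy", "love", "surprise", "sadness"]

def depth_emotion_distribution(title_scores):
    """Average per-title emotion probabilities into one distribution per depth.

    `title_scores` maps depth -> list of 6-element probability vectors (one per
    video title), e.g. the output of an emotion classifier; stubbed here with
    made-up numbers."""
    return {depth: dict(zip(EMOTIONS, np.mean(vectors, axis=0)))
            for depth, vectors in title_scores.items()}

# Invented classifier outputs: two depths, a few titles each.
fake_scores = {
    1: [[0.05, 0.05, 0.60, 0.10, 0.10, 0.10],
        [0.10, 0.10, 0.50, 0.10, 0.10, 0.10],
        [0.05, 0.05, 0.70, 0.10, 0.05, 0.05]],
    2: [[0.05, 0.05, 0.65, 0.10, 0.10, 0.05],
        [0.05, 0.15, 0.55, 0.10, 0.10, 0.05]],
}
dist = depth_emotion_distribution(fake_scores)
print(max(dist[1], key=dist[1].get))  # "joy" dominates, as observed in the study
```

Plotting each emotion's averaged probability against depth yields the trend lines shown in the emotion drift figures.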
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head><p>This section discusses our findings at each stage of our methodology.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Topic Drift Analysis of Recommended Videos</head><p>The topic drift analysis aims to determine whether YouTube's recommendations deviate from the original topic as they progress through five depths of recommendations. A high Hellinger distance score signifies low similarity between two probability distributions, and vice versa. Therefore, topic drift occurs when there is a sequential increase in the Hellinger distance between topic probability distributions; on a line chart, an upward trend (increasing Hellinger distance) indicates topic drift, i.e., decreasing similarity. After collecting the recommendations for five depths, we calculated the language distribution of videos in each depth. Indonesian and English had the largest shares, 55.5% and 34.6% respectively, which influenced our decision to perform a comparative analysis of the two languages. The topic drift methodology described above was applied to the subsets of these languages across depths. Since our seed videos were all Indonesian, we translated them into English and used the translations to calculate the topic drift of the English videos.</p><p>In Figure <ref type="figure" target="#fig_0">1</ref> (a &amp; b), we measured the topic similarity of the recommended videos between adjacent depths (the inter-depth distance). Looking at the results for English and Indonesian videos in Figure <ref type="figure" target="#fig_0">1</ref> (a &amp; b), we observed a declining Hellinger distance score: the maximum distance occurred between the seed (depth 0) and depth 1, with a steady decline thereafter. This means that as we progress through the recommendation depths, the content similarity between adjacent depths increases; the recommendations become more similar to each other.</p><p>Furthermore, the seed-to-depth Hellinger distance scores for English and Indonesian videos in Figure <ref type="figure" target="#fig_0">1 (c &amp; d</ref>) were used to determine the relevance of the recommendations in each depth to the original narrative. From the graphs, we observed that the distance between the seed and the recommendations increases as we progress through the depths. This continuous topic drift shows that the recommended videos become more dissimilar from the seed as we move through the depths.</p><p>Combining the two topic drift views suggests that regardless of each depth's increasing dissimilarity to the seed (seed-to-depth distance), YouTube's recommendation engine strives to keep the content between adjacent depths similar (inter-depth distance). This explains how users are gradually exposed to videos irrelevant to the seed. We also observed a higher magnitude of topic drift for English videos than for Indonesian videos.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Analysis of the Recommendation Network</head><p>Next, we analyzed each depth's network to identify the influential nodes with a higher propensity to be recommended by the algorithm. The videos in each depth were ranked by their eigenvector centrality score, which measures a node's transitive influence in the network. We selected and examined the top 3 most influential videos based on these scores. By examining the network's influential nodes, we identify the kinds of videos that drive recommendations and how a video's influence evolves across the depths. We observed the influential nodes in the network evolve from videos relevant to the Cheng Ho topic to irrelevant ones.</p><p>The topics of the influential nodes drift from the original narrative, and a new topic ("Cha Guan") is subtly introduced at depths 2 and 3, with a complete shift by depth 4. "Cha Guan" is a segment on the "Asumi" channel, a media-tech institution aimed at Indonesia's younger demographic, with a focus on current events and pop culture. Notably, the videos on this channel discussed the economy and religious freedom in China. Continuing to observe the evolution of influential videos in the network, we noticed a shift in language at depth 5, from Indonesian to English. In addition to this language change among influential videos at depth 5, the distribution of eigenvector centrality scores was significantly lower.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Emotion Drift Analysis</head><p>Analyzing the emotions attached to videos at different recommendation depths, and whether they remain similar to the original narrative, helps assess the impact of emotions on YouTube's recommendations. A comparative analysis of the emotion drift in Indonesian and English videos across all depths allows us to identify patterns that may exist in the different language contexts. In the emotion drift charts of Indonesian and English videos shown in Figure <ref type="figure" target="#fig_2">3</ref>, we see that a similar trend exists between them. Joy is the most prominent emotion exhibited in the video titles of both languages. This could be because users tend to engage more with positive emotions than negative ones <ref type="bibr" target="#b12">[13]</ref>, or because it aligns with the disinformation propaganda of the China-driven Cheng Ho narrative. Further analysis of the impact of emotions on video recommendations will be conducted in future work by comparing the emotion drifts of competing narratives.</p><p>The results from the topic drift analysis show high content similarity between recommendation depths, but low content similarity with the seeds. This increased similarity between recommendation depths, combined with their divergence from the seed videos, is indicative of bias toward a streamlined pool of videos with no relevance to the original topic. Furthermore, analyzing the network with the eigenvector centrality measure revealed the influential videos at each depth. Additionally, the emotion analysis showed joy as the most prominent emotion, which aligns with the emotions attached to the Cheng Ho narrative.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>From our analysis, we find no direct evidence that YouTube's recommendations contribute to spreading disinformation. The results of our topic drift analysis show that as we progressively step through YouTube's recommendations, we get further away from our original narrative. However, we observed an increased similarity between adjacent recommendation depths. The increased similarity between recommendations and their divergence from the seed videos is suggestive of a bias toward a streamlined pool of videos with no relevance to the original topic. This is evident in the results of our network analysis, where we see the content of the influential videos switch from our original topic to videos about "Cha Guan". From our comparison of Indonesian and English topic drifts, we observed similar patterns between them, but the English videos had no relevance to our original topic. The emotion analysis conducted on the recommended videos showed joy as the most prominent emotion; this could be because users tend to engage more with positive emotions than negative ones, or it could tie into the theme of the China-driven Cheng Ho narrative. From our findings, we conclude that YouTube's recommendation algorithm does not directly aid the spread of disinformation, but that as we step through recommendations, the relevance of their content to the original narrative diminishes.</p><p>Future directions for this research include evaluating topic drift on different levels of video text, including the video description, transcript, and a concatenation of video title and description. Other directions may involve building a predictive model from video characteristics (engagement stats, eigenvector centrality score, topic distribution, etc.)
at each depth to determine the likelihood of a video being recommended by another video.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Topic Drifts (a)Inter-depth distance, English videos (b)Inter-depth distance, Indonesian videos (c)Seed-depth distance, English videos (d)Seed-depth distance, Indonesian videos.</figDesc><graphic coords="7,97.62,85.26,100.01,77.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Evolution of YouTube's recommendation network.</figDesc><graphic coords="8,116.12,239.48,173.63,147.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Emotion drift analysis of (a) English Videos and (b) Indonesian Videos.</figDesc><graphic coords="9,297.64,84.19,137.51,136.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Recommendation Depth Statistics.</figDesc><table><row><cell>Depth</cell><cell cols="2">Number of Videos Number of Unique Videos</cell></row><row><cell>Seed</cell><cell>50</cell><cell>50</cell></row><row><cell>Depth 1</cell><cell>247</cell><cell>188</cell></row><row><cell>Depth 2</cell><cell>1212</cell><cell>819</cell></row><row><cell>Depth 3</cell><cell>5985</cell><cell>3521</cell></row><row><cell>Depth 4</cell><cell>29586</cell><cell>14755</cell></row><row><cell>Depth 5</cell><cell>145923</cell><cell>47101</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research is funded in part by the U.S. National Science Foundation (OIA-1946391, OIA-1920920, IIS-1636933, ACI-1429160, and IIS-1110868), U.S. Office of the Under Secretary of Defense for Research and Engineering (FA9550-22-1-0332), U.S. Office of Naval Research (N00014-10-1-0091, N00014-14-1-0489, N00014-15-P-1187, N00014-16-1-2016, N00014-16-1-2412, N00014-17-1-2675, N00014-17-1-2605, N68335-19-C-0359, N00014-19-1-2336, N68335-20-C-0540, N00014-21-1-2121, N00014-21-1-2765, N00014-22-1-2318), U.S. Air Force Research Laboratory, U.S. Army Research Office (W911NF-20-1-0262, W911NF-16-1-0189, W911NF-23-1-0011), U.S. Defense Advanced Research Projects Agency (W31P4Q-17-C-0059), Arkansas Research Alliance, the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock, and the Australian Department of Defense Strategic Policy Grants Program (SPGP) (award number: 2020-106-094). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding organizations. The researchers gratefully acknowledge the support.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="https://edu.gcfglobal.org/en/youtube/what-is-youtube/1/" />
		<title level="m">YouTube: What is YouTube?</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/" />
		<title level="m">Biggest social media platforms 2022</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">P.-J</forename><surname>Allin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Corman</surname></persName>
		</author>
		<ptr target="https://smallwarsjournal.com/jrnl/art/chinas-columbus-was-imperialist-too-contesting-myth-zheng-he" />
		<title level="m">China&apos;s Columbus&quot; Was an Imperialist Too: Contesting the Myth of Zheng He</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<ptr target="https://www.abc.net.au/news/2019-09-22/zheng-he-chinese-islam-explorer-belt-and-road/11471758" />
	</analytic>
	<monogr>
		<title level="m">&apos;Fake news of the century&apos;: The Muslim explorer China deploys while persecuting Muslims</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Zheng He and the maritime silk road</title>
		<ptr target="https://u.osu.edu/mclc/2015/10/02/zheng-he-and-the-maritime-silk-road/" />
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The Zheng He Voyages: A Reassessment</title>
		<author>
			<persName><forename type="first">G</forename><surname>Wade</surname></persName>
		</author>
		<idno type="DOI">10.2307/41493537</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of the Malaysian Branch of the Royal Asiatic Society</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="37" to="58" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="https://www.cfr.org/backgrounder/china-xinjiang-uyghurs-muslims-repression-genocide-human-rights" />
		<title level="m">China&apos;s Repression of Uyghurs in Xinjiang</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://www.apa.org/topics/journalism-facts/misinformation-disinformation" />
		<title level="m">Misinformation and disinformation</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><surname>APC</surname></persName>
		</author>
		<ptr target="http://bit.ly/3FPHghD" />
		<title level="m">Disinformation and freedom of expression: Submission in response to the call by the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">Ric</forename><surname>Neo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jason</forename><forename type="middle">Dc</forename><surname>Yin</surname></persName>
		</author>
		<ptr target="https://www.apc.org/en/pubs/social-discipline-and-control-impact-fake-news-and-disinformation-minorities-indonesia" />
		<title level="m">Of social discipline and control: The impact of fake news and disinformation on minorities in Indonesia | Association for Progressive Communications</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bisbee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bonneau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nagler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Tucker</surname></persName>
		</author>
		<idno type="DOI">10.2139/ssrn.4114905</idno>
		<ptr target="https://papers.ssrn.com/abstract=4114905" />
		<title level="m">Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Examining Video Recommendation Bias on YouTube</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kirdemir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kready</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hussain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-78818-6_10</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="106" to="116" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Auditing the Biases Enacted by YouTube for Political Topics in Germany</title>
		<author>
			<persName><forename type="first">H</forename><surname>Heuer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hoch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Breiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Theocharis</surname></persName>
		</author>
		<idno type="DOI">10.1145/3473856.3473864</idno>
		<idno type="arXiv">arXiv:2107.09922 [cs]</idno>
		<ptr target="http://arxiv.org/abs/2107.09922" />
	</analytic>
	<monogr>
		<title level="j">Mensch und Computer</title>
		<imprint>
			<biblScope unit="page" from="456" to="468" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<ptr target="https://datascienceplus.com/topic-modeling-and-latent-dirichlet-allocation-lda/" />
		<title level="m">Topic Modeling and Latent Dirichlet Allocation (LDA) | DataScience+</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<ptr target="https://www.sciencedirect.com/topics/social-sciences/network-analysis" />
		<title level="m">Network Analysis - an overview | ScienceDirect Topics</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Hellinger Distance and Non-informative Priors</title>
		<author>
			<persName><forename type="first">A</forename><surname>Shemyakin</surname></persName>
		</author>
		<idno type="DOI">10.1214/14-BA881</idno>
	</analytic>
	<monogr>
		<title level="j">Bayesian Analysis</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="923" to="938" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note>International Society for Bayesian Analysis</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">YouTube Data Collection Using Parallel Processing</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kready</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Shimray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Hussain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1109/IPDPSW50202.2020.00185</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1119" to="1122" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Learn how to use Gephi</title>
		<ptr target="https://gephi.org/users/" />
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Data Story: Gephi - Clustering layout by modularity</title>
		<ptr target="https://parklize.blogspot.com/2014/12/gephi-clustering-layout-by-modularity.html" />
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<ptr target="https://neo4j.com/docs/graph-data-science/2.2/algorithms/eigenvector-centrality/" />
		<title level="m">Eigenvector Centrality -Neo4j Graph Data Science</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">An efficient way of text-based emotion analysis from social media using LRA-DNN</title>
		<author>
			<persName><forename type="first">N</forename><surname>Shelke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chaudhury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chakrabarti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Bangare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Yogapriya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Pandey</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neuri.2022.100048</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/S2772528622000103" />
	</analytic>
	<monogr>
		<title level="j">Neuroscience Informatics</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">100048</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<ptr target="https://huggingface.co/mrm8488/t5-base-finetuned-emotion" />
		<title level="m">mrm8488/t5-base-finetuned-emotion · Hugging Face</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
