<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Toxicity and Networks of COVID-19 Discourse Communities: A Tale of Two Social Media Platforms</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Karen</forename><surname>Dicicco</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Arkansas at Little Rock</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<region>AR</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nahiyan</forename><forename type="middle">B</forename><surname>Noor</surname></persName>
							<email>nbnoor@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Arkansas at Little Rock</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<region>AR</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Niloofar</forename><surname>Yousefi</surname></persName>
							<email>nyousefi@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Arkansas at Little Rock</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<region>AR</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maryam</forename><surname>Maleki</surname></persName>
							<email>mmaleki@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Arkansas at Little Rock</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<region>AR</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Billy</forename><surname>Spann</surname></persName>
							<email>bxspann@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Arkansas at Little Rock</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<region>AR</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nitin</forename><surname>Agarwal</surname></persName>
							<email>nxagarwal@ualr.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Arkansas at Little Rock</orgName>
								<address>
									<settlement>Little Rock</settlement>
									<region>AR</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Toxicity and Networks of COVID-19 Discourse Communities: A Tale of Two Social Media Platforms</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">EC3073B11EE2F69DBD7F2EE3C37C2082</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-06-19T14:17+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Toxicity analysis</term>
					<term>social network analysis</term>
					<term>COVID-19</term>
					<term>Parler</term>
					<term>Twitter</term>
					<term>hate speech</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The issue of hateful or toxic content on social media platforms such as Twitter and Parler is on the rise and demands attention. The aim of this research is to compare and analyze toxicity between Twitter and Parler in COVID-19 discourse. Highly toxic individuals and their networks are analyzed for the two platforms. Data from January 1, 2020, to December 31, 2020, is analyzed to ascertain and compare overall network health and the evolution of toxicity over time. We found evidence that Twitter contained a higher level of toxicity regarding COVID-19 discourse than Parler. When analyzing COVID-19 vaccine discussions within the Twitter network, prominent conspiracy theory themes emerged among highly toxic users. Within the Parler COVID-19 vaccine discussion, we identified clusters of highly toxic users and important bridges aiding the spread of misinformation. These toxic conversations could impact the public health response to various non-pharmaceutical interventions (NPIs). The research demonstrates a computational method to evaluate toxicity and offers policymakers a means to improve the overall health of online discourse by stemming the flow of toxicity through communities in online social networks.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>While major social media platforms like Facebook, Twitter, and YouTube have implemented guidelines and enforcement measures to manage toxic content and misinformation, "free speech" platforms such as Parler have been more lenient in permitting hate speech, conspiracy theories, and potentially harmful misinformation. One study quantified the share of active Parler users who post hateful content <ref type="bibr" target="#b0">[1]</ref>. Parler is a micro-blogging platform comparable to Twitter that, by design, lacks the content moderation rules and capabilities of the platform it emulates. Parler was created before the emergence of COVID-19, but it has since become an important vector for online misinformation, a place where users can spread COVID-19 misinformation without restriction.</p><p>The spread of abusive language and toxic content on social media can have negative impacts on communities. Analyzing toxic content provides additional insights and helps address the challenge of managing safety on social platforms. Our analysis contributes to the current research on the health of social media. For this paper, we consider misinformation to be a claim that contradicts or distorts the common understanding of verifiable facts <ref type="bibr" target="#b1">[2]</ref>.</p><p>In 2020, Parler, previously relatively unknown, experienced a sudden rise in prominence as conservative media personalities and politicians sought to distance themselves from bigger and more established social media platforms, a move prompted by perceived bias and censorship against conservative perspectives on those platforms. As the COVID-19 pandemic spread globally in 2020, both Twitter users and the predominantly far-right user community of Parler participated in conversations and shared material concerning efforts to vaccinate against the disease. 
This study undertakes a comparative assessment of the toxicity levels of COVID-19-related content on Twitter and Parler from January 1, 2020, to December 31, 2020. We analyzed user posts for each platform and compared the evolution of toxicity levels over time. We present evidence that Twitter contained a higher level of toxicity for COVID-19 discourse than Parler in three of the four COVID-19-related content datasets we analyzed. Using the segments of the corpus that contained toxicity, we created co-hashtag graph networks for both platforms to analyze the context of the additional hashtags users were disseminating. This provided additional insight into the COVID-19 vaccine discussion within the Twitter network, which included prominent conspiracy theory themes, among them Bill Gates and the far-right conspiracy group QAnon. The graph network for Parler contained defined user communities, a misinformation echo chamber, and important bridge nodes that served to spread information throughout the rest of the network, including a bridge node that connected an identified QAnon group to a pro-Trump group.</p><p>This work answers two research questions: 1) Do Twitter and Parler differ in terms of toxicity on each platform? (Section 5.1) 2) Was toxic content on the COVID-19 vaccine narrative used to spread toxicity to other narratives? (Section 5.2)</p><p>The remainder of this paper is organized as follows. Section 2 presents related work on toxicity in social media. Section 3 describes the data collection process, and Section 4 the methodology used in this paper. Section 5 presents the highlights from our results and analysis. 
Finally, Section 6 concludes with the contributions of this work and presents our plans and ideas for future work.</p><p>The key findings and contributions of this research are:</p><p>• Twitter contained a statistically significantly (p-value &lt; 0.05) higher level of toxicity than Parler in COVID-19 discourse. (Section 5.1)</p><p>• Prominent conspiracy theory themes were identified within the Twitter network originating from the COVID-19 vaccine narrative, such as those regarding Bill Gates and the QAnon group. (Section 5.2)</p><p>• Well-defined user communities with highly toxic content were identified, including a misinformation echo chamber within the Parler network. (Section 5.2)</p><p>• Significant bridge nodes were identified that spread toxic COVID-19 vaccine misinformation throughout the Parler network. (Section 5.2)</p><p>In the next section, we present research from previous studies, background on existing approaches to detecting toxicity, and previous research discussing the impact of toxicity on public health discourse.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">RELATED WORK</head><p>Several researchers have attempted to characterize the behavior of toxic users and to predict their future behavior. Cheng et al. investigated the long-term patterns of users displaying anti-social behavior in online forums and whether anti-social users can be identified early in their posting history using text quality metrics <ref type="bibr" target="#b2">[3]</ref>. Guberman et al. developed a scale for assessing online aggression and applied it to a random sample of Twitter data <ref type="bibr" target="#b3">[4]</ref>. Garimella et al. developed a technique for quantifying online discussions that cause controversy, emphasizing the importance of identifying these topics to understand the formation of echo chambers. They found that trolling behavior decreases with the amount of time between a user's posts, suggesting that negative behavior could be minimized by instituting a calming period during which users are unable to post comments <ref type="bibr" target="#b4">[5]</ref>. Amrollahi <ref type="bibr" target="#b22">(2021)</ref> discusses how users' increasing reliance on social media as a source of information can create filter bubbles, which in turn can polarize society <ref type="bibr" target="#b5">[6]</ref>.</p><p>Pascual-Ferrá et al. claim that social media has an important effect on public health issues. They focused on online conversations regarding COVID-19 and mask wearing to understand toxicity's role in this discourse <ref type="bibr" target="#b6">[7]</ref>. Majó-Vázquez et al. investigated the number of toxic conversations on social media during the COVID-19 pandemic, the patterns they follow, and the health of online discussions <ref type="bibr" target="#b7">[8]</ref>. 
In a similar study, Xue et al. analyzed tweets shared on Twitter regarding COVID-19 to investigate the discourses, sentiments, and concerns on social media <ref type="bibr" target="#b8">[9]</ref>.</p><p>Researchers have developed multiple methods and models to detect toxicity in online text. Watanabe et al. proposed a machine-learning method to detect hate speech on Twitter using sentiment and semantic-based features <ref type="bibr" target="#b9">[10]</ref>.</p><p>Gunasekara and Nejadgholi trained a multi-label classifier to detect toxicity in online conversational text, concluding that character-level text representation techniques were superior in performance to word-level representations <ref type="bibr" target="#b10">[11]</ref>. A few studies have assessed the performance and generalizability of available toxicity detection models. Hanu developed Detoxify, a trained model designed to predict toxic content. The model is capable of detecting various types of toxicity, such as threats, obscenity, insults, and identity hate. It outputs a score for each category, and based on these scores the content is labeled as toxic or not <ref type="bibr" target="#b11">[12]</ref>. Using this method, Noor et al. detected toxicity scores and their different types, comparing the level of toxicity across three social media platforms (Twitter, Parler, and Reddit) in discussions related to COVID-19 <ref type="bibr" target="#b12">[13]</ref>.</p><p>In a study by Obadimu et al., a Non-negative Matrix Factorization (NMF) technique, formulated as an optimization problem, was used to forecast commenter toxicity on YouTube. Their findings showed that the NMF model forecast toxicity scores more accurately than other models and had better computation time <ref type="bibr" target="#b13">[14]</ref>. In another study, Obadimu et al. developed an epidemiological model to evaluate the spread of toxicity on YouTube. 
They used an STRS (Susceptible, Toxic, Recovered, Susceptible) model to show the similarity between the propagation of toxicity on YouTube and the spread of a disease in a population <ref type="bibr" target="#b14">[15]</ref>. Several researchers have analyzed online toxicity from a case study perspective. Qayyum et al. analyzed the patterns of political discourse in Pakistan and India, finding that toxicity is prevalent across all sources studied <ref type="bibr" target="#b15">[16]</ref>. In another study, Obadimu et al. evaluated five different forms of toxicity in the comments posted on pro- and anti-NATO channels on YouTube. Their analysis demonstrated that comments on pro-NATO channels are less toxic than those on anti-NATO channels <ref type="bibr" target="#b16">[17]</ref>. Obadimu et al. analyzed toxic ideas related to COVID-19 and the users who spread them on YouTube, using social network analysis to find the influential and top users in the network and applying toxicity analysis to evaluate the health of the network <ref type="bibr" target="#b17">[18]</ref>. Pascual-Ferrá et al. evaluated the role of toxicity on Twitter regarding wearing face masks during the COVID-19 pandemic. Their results showed that tweets using pro-mask hashtags were significantly less likely to contain toxic comments, while those with anti-mask hashtags were somewhat more toxic <ref type="bibr" target="#b18">[19]</ref>. Chandrasekara et al. discussed the concept of social influence on social networks, stressing that, although there are multiple constructs involved in the social influence process, an important boundary condition involves "the direct vs. indirect peer influence," wherein influence can arise both from a user's immediate neighbor nodes (direct) and from the common neighbors of their peers (indirect or bridge nodes) <ref type="bibr" target="#b19">[20]</ref>. Trinkle et al. 
discuss how actions (sanctions) taken against actors who engage in deviant behaviors affect deterrence. Although the authors' case study involves a real-world social network in the form of employees, their results can be applied to online social networks, arming platform administrators with effective knowledge to formulate strategies for neutralization <ref type="bibr" target="#b20">[21]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Data Collection</head><p>The data from both Twitter and Parler analyzed in this work consists of a corpus of user posts collected based on a list of seed hashtags related to COVID-19 from January 1, 2020, through December 31, 2020 (Table <ref type="table" target="#tab_0">1</ref>) <ref type="bibr" target="#b21">[22]</ref>. The Twitter Developer API was used to collect data post hoc for the hashtags in Table <ref type="table" target="#tab_0">1</ref>. Because of this, tweets and accounts that Twitter had removed for spreading misinformation were not collected. Data collected in this study will be made available upon request, in accordance with the data-sharing guidelines of Twitter and Parler. In the next section, we discuss our methodology and approach to calculating toxicity.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Methodology</head><p>Prior to executing the toxicity analysis, we removed our seed keywords and hashtags from each record in the datasets so their presence would not influence the calculated toxicity score. Non-English posts were removed for both Parler and Twitter, as more than 90% of our data is in English and the Detoxify model generates toxicity scores most effectively for English text.</p><p>We computed toxicity scores for each Parler post and Twitter tweet in the dataset using Detoxify. Detoxify is a model created by Unitary AI (https://github.com/unitaryai/detoxify) that uses a Convolutional Neural Network trained with word vector inputs to determine whether a text could be perceived as "toxic" to a discussion. Given a text input, the Detoxify API returns a probability score between 0 and 1, with higher values indicating a greater likelihood of the toxicity label applying to the text. Since toxicity scores are probabilities between 0 and 1, scores of 0.5 or greater indicate a piece of text labeled as "toxic". Detoxify returns seven categories of toxicity scores in terms of level and type: 1) toxicity, the overall level of toxicity for a piece of text, 2) severe toxicity, 3) obscene, 4) threat, 5) insult, 6) identity attack, and 7) sexually explicit. We chose Detoxify because it is an open-source Python library for detecting harmful and inappropriate text online. It is a multilingual model that has been trained on English, French, Italian, Spanish, Russian, Turkish, and Portuguese. Although the model predicts toxicity by providing a score, it can be unreliable when words related to swearing, insults, or profanity are present in the text. 
The model may label otherwise non-toxic text as toxic when such words are present. However, this error rate is very low, and since it affects both platforms equally, we can ignore it.</p><p>For comparison, we also explored Google's Perspective API, a related type of model with similar outputs used to determine toxicity, and found that its scores were similar.</p><p>For our co-hashtag social network analysis, we used NetworkX (https://networkx.org/), a Python library for creating and analyzing network graphs, to generate co-hashtag graph networks for the Twitter and Parler vaccine category datasets. We used the Girvan-Newman algorithm to identify distinct communities within each network <ref type="bibr">[23]</ref>. We removed posts with toxicity scores below 0.5 to focus the analysis on highly toxic content <ref type="bibr" target="#b17">[18]</ref>. The next section discusses our analysis and results for these Twitter and Parler datasets.</p></div>
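To make the pipeline concrete, the sketch below filters posts by a Detoxify-style toxicity score, builds a co-hashtag graph, and partitions it with NetworkX's Girvan-Newman implementation. The posts, hashtags, and scores here are illustrative stand-ins, not records from the study's datasets.

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import girvan_newman

# Hypothetical posts with pre-computed Detoxify-style toxicity scores.
posts = [
    {"hashtags": ["vaccine", "billgates", "plandemic"], "toxicity": 0.82},
    {"hashtags": ["vaccine", "wwg1wga", "trump2020"], "toxicity": 0.71},
    {"hashtags": ["vaccine", "billgates"], "toxicity": 0.64},
    {"hashtags": ["science", "health"], "toxicity": 0.08},  # dropped: below 0.5
    {"hashtags": ["wwg1wga", "stopthesteal", "trump2020"], "toxicity": 0.90},
]

# Keep only highly toxic posts (score >= 0.5), as in the paper.
toxic = [p for p in posts if p["toxicity"] >= 0.5]

# Build the co-hashtag graph: hashtags are nodes; an edge links two hashtags
# that co-occur in a post, with the edge weight counting co-occurrences.
G = nx.Graph()
for p in toxic:
    for a, b in combinations(sorted(set(p["hashtags"])), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Girvan-Newman repeatedly removes the highest-betweenness edge until the
# graph splits; take the first (coarsest) partition of communities.
communities = [sorted(c) for c in next(girvan_newman(G))]
print(communities)
```

On this toy graph, the algorithm separates the Bill Gates-themed hashtags from the QAnon/pro-Trump hashtags, which is the kind of community structure the study inspects in the real Vaccine datasets.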
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Analysis and Results</head><p>In this section, we discuss our analysis and results. First, we discuss the overall posting frequency of our seed hashtags (and keywords) and the results of our toxicity analysis. This is followed by a discussion of our social network analysis using co-hashtag graph networks, along with visualizations of some of the most interesting highlights from our findings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Toxicity Analysis</head><p>The analysis was conducted using Twitter data, and the hashtags used as seeds were first observed in March 2020.</p><p>From March to December 2020, the COVID category had the highest number of posts compared to the other Twitter datasets. The number of tweets peaked in mid-April and near the end of June, with a significant increase in mid-November (Figure <ref type="figure" target="#fig_0">1</ref>). It was not until the end of May that posting activity using the seed keywords was observed on Parler, as shown in Figure <ref type="figure" target="#fig_0">1</ref> (right). In November, there was a significant increase in posting frequency across all Parler datasets, which initially appeared indicative of artificial behavior. However, upon further inspection of the dataset, we discovered that Parler users tended to use all four seed hashtags within a single post, unlike Twitter users. Our analysis revealed differences in the presence of toxicity (toxicity score &gt; 0.5) in user-generated text content between Twitter and Parler from January 1, 2020, to December 31, 2020.</p><p>When comparing the amounts of harmful content on each site, Twitter had a greater proportion of toxic tweets overall (Figure <ref type="figure" target="#fig_1">2</ref>). This means that the majority of the Twitter content had a higher probability of being labeled toxic than the Parler content. Surprisingly, in the overall toxicity category, the Twitter content for all datasets had a higher percentage of content with toxicity scores greater than 0.7 and greater than 0.9 than did the Parler content (see Table <ref type="table" target="#tab_2">2</ref>). Parler exceeded Twitter in the percentage of toxic content only for the COVID dataset. 
This is an interesting result because we expected to see more toxic content on Parler due to the free-speech nature of the platform and how it touts its lack of censorship as a selling point for users. We also looked at the "obscene" and "insult" toxicity categories for each tweet and post across all datasets. Of the seven categories of toxicity scores obtained from Detoxify, only three contained enough data to warrant inclusion in the discussion: toxicity (overall), obscene, and insult. More Twitter content fell into the obscene category than Parler content for all datasets, with the highest percentage being within the Lockdown dataset (28.6% versus 13.06%). However, more Parler content fell into the insult category than Twitter content for the COVID dataset (18.01% versus 10.93%). The percentage of toxic content (overall toxicity category) within the Vaccine datasets varied considerably between platforms (30.98% for Twitter versus 11.93% for Parler).</p><p>Overall, the toxicity analysis revealed that Twitter was more toxic than Parler in all but one case, the COVID dataset.</p><p>For both platforms, the toxic content was predominantly obscene and insulting. However, the toxic content on Twitter was more obscene than that of Parler, especially within the Lockdown dataset, while the toxic content on Parler was more insulting within the COVID dataset.</p><p>Next, we tested the statistical validity of our findings by conducting significance testing between Twitter and Parler using a t-test for each of the four datasets. The null hypothesis of the t-test is that the means of the two groups are the same. The p-values for these t-tests are shown in Table <ref type="table" target="#tab_3">3</ref>. The p-values for all these tests are well below 0.05, which implies that the null hypothesis can be rejected for all four pairs. 
So, we accept the alternative hypothesis: there are significant differences between the mean toxicity scores for Twitter and Parler for all four datasets.</p><p>Table <ref type="table" target="#tab_2">2</ref> reveals notable differences in the mean and median toxicity values across the platforms and discussion contexts, which is a peculiar characteristic of this data. The distributions of toxicity for these datasets are not uniform but highly skewed: many observations have very low toxicity and a few have extremely high toxicity, which drives up the mean without influencing the median. The Twitter data, for example, shows a few conversations that are very toxic, and those few highly toxic conversations drive up the overall toxicity level of the platform. This has important implications for platform administrators, who may be able to significantly reduce the strongest drivers of toxicity by moderating the relatively few highly toxic users, rather than attempting platform-wide changes affecting all users. Table <ref type="table" target="#tab_3">3</ref> also gives a good summary comparison of the two platforms in terms of discussion context, especially for vaccine and lockdown topics, for which Twitter is clearly more toxic. The toxicity standard deviation metrics revealed additional contrasts between the two platforms (Table <ref type="table" target="#tab_3">3</ref>). The standard deviations of toxicity values for content within the lockdown, mask, and vaccine categories are higher on Twitter than on Parler, indicating more variation in toxicity for these datasets, although values were higher on Parler for content within the COVID category. 
Since the percentage of toxic content in the vaccine datasets contrasted considerably between Twitter and Parler, we next drill down into those datasets and examine their network structures to identify possible explanations for this drastic difference.</p><p>The 5-point statistical analysis in Figures <ref type="figure">3 to 6</ref> shows that, in general, the term "f*ckcovid" is more toxic on Parler than on Twitter, whereas for the other negative terms related to COVID, Twitter is more toxic. We use the top three most severe classes for comparison in our statistical analysis. For the 'f*cklockdown' and 'f*ckvaccine' hashtags, Parler is less toxic than Twitter if we consider the mean toxicity from the boxplots for both platforms. On the other hand, for the 'f*ckcovid' and 'f*ckmask' hashtags there is a significant increase in toxicity on Parler. On Twitter, the most toxic hashtag category is 'f*ckmask', whereas for Parler it is 'f*ckcovid'. </p></div>
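The skewed-distribution reasoning and the t-test logic above can be illustrated on toy data. The scores below are invented for illustration, and the Welch t-statistic is a hand-rolled version of the unequal-variance two-sample t-test (the paper does not specify its exact test configuration), so this is a sketch of the mechanics rather than a reproduction of the study's results.

```python
import statistics

# Invented toxicity scores mimicking the skew described in the text:
# many near-zero observations plus a few extremely toxic ones.
twitter = [0.02, 0.03, 0.05, 0.04, 0.06, 0.95, 0.98, 0.91]
parler = [0.02, 0.03, 0.04, 0.03, 0.05, 0.06, 0.07, 0.55]

# A handful of highly toxic posts pulls the mean far above the median.
mean_tw, median_tw = statistics.mean(twitter), statistics.median(twitter)
print(f"Twitter mean={mean_tw:.3f} median={median_tw:.3f}")

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# A positive t means the first sample's mean toxicity is higher; a p-value
# would come from the t-distribution with Welch-Satterthwaite degrees of
# freedom (e.g. via scipy.stats.ttest_ind(..., equal_var=False)).
t_stat = welch_t(twitter, parler)
print(f"t = {t_stat:.2f}")
```

On real data with thousands of posts per dataset, even modest mean differences yield the very small p-values reported in Table 3.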
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Social Network Analysis</head><p>Conducting social network analysis allows us to identify important characteristics of the users within these datasets for each platform. Using NetworkX, we created co-hashtag network graphs to compare the Twitter and Parler Vaccine datasets, filtered down to the tweets and posts that scored greater than 0.5 on toxicity. This allowed us to identify the user communities, examine the context of the hashtags, and see what other topics and information toxic users shared and actively associated with the Vaccine hashtags/keywords. Our results show that the overall structures of the Twitter and Parler co-hashtag networks vary considerably, as do the structures of their internal components. At the highest level, the Twitter co-hashtag network appears unstructured and somewhat scattered, with a few small clusters of users (Figure <ref type="figure">7</ref>, left), whereas the Parler co-hashtag network is clustered and shows clear connective bridges and communities of users (Figure <ref type="figure">7</ref>, right). Drilling down into these co-hashtag networks, we can identify some of the contexts within these clusters of users. When analyzing the Twitter network, we identified a mass of users sharing hashtags indicative of various Bill Gates conspiracies, anti-vaccination ideas, and the far-right conspiracy group QAnon (Figure <ref type="figure" target="#fig_3">8</ref>). QAnon is a movement whose followers spread false information on a variety of topics, including COVID-19 <ref type="bibr" target="#b23">[24]</ref>. On Parler, QAnon followers often use the #wwg1wga hashtag in posts. The hashtag #wwg1wga is an abbreviation for the phrase "where we go one, we go all" <ref type="bibr" target="#b24">[25]</ref>. 
This hashtag was identified as a clear bridge node in the Parler network, meaning that it is an important connector node and conduit through which other information flows within and throughout the network (Figure <ref type="figure" target="#fig_4">9</ref>). This echo chamber is completely separated from the rest of the network components. Instead of exchanging new information with one another, the users within it repeatedly share the same ideas, such as #plandemic, #masksmakeyousick, #vaccineskill, and #coronavirushoax. The #plandemic hashtag came into use after the "Plandemic" movie was posted online, which spread numerous conspiracy theories about COVID-19 <ref type="bibr" target="#b25">[26]</ref>. Overall, the Twitter network showed prominent conspiracy themes such as those regarding Bill Gates and QAnon. In contrast, the Parler network showed defined user community clusters, the most concerning consisting of a misinformation echo chamber. Within the COVID-19 vaccine discussion in the Parler network, important bridge nodes were identified that serve to spread information throughout the rest of the network. One interesting observation was the bridge node that connected the QAnon group with an identified pro-Trump group. In the next section, we discuss our conclusions and ideas for future work.</p></div>
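Bridge nodes like #wwg1wga can be surfaced computationally with betweenness centrality, which scores how many shortest paths run through a node. The miniature network below is a hypothetical stand-in for the Parler co-hashtag graph, wiring a QAnon/anti-vaccine cluster to a pro-Trump cluster through a single connector; the edge list is invented for illustration.

```python
import networkx as nx

# Hypothetical co-hashtag edges: an anti-vaccine cluster on the left and a
# pro-Trump cluster on the right, joined only through #wwg1wga.
edges = [
    ("vaccineskill", "plandemic"),
    ("vaccineskill", "masksmakeyousick"),
    ("plandemic", "masksmakeyousick"),
    ("vaccineskill", "wwg1wga"),
    ("plandemic", "wwg1wga"),
    ("wwg1wga", "trump2020"),
    ("trump2020", "maga"),
    ("trump2020", "stopthesteal"),
    ("maga", "stopthesteal"),
]
G = nx.Graph(edges)

# A bridge node sits on every shortest path between the two clusters,
# so it receives the highest betweenness centrality score.
centrality = nx.betweenness_centrality(G)
bridge = max(centrality, key=centrality.get)
print(bridge)
```

Removing or moderating such a node would disconnect the clusters, which is why the paper flags bridge nodes as leverage points for stemming the flow of misinformation.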
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions and Future Work</head><p>In the specified timeframe, both Twitter and Parler exhibited considerable degrees of toxicity concerning COVID-19 topics. This study examines and contrasts the degree of toxicity and the network patterns of these toxic communities on both platforms. The methods were applied to different datasets for Twitter and Parler. The results obtained from this research are preliminary and restricted to a particular data arrangement. The findings indicate that toxicity levels were higher overall on Twitter for all datasets except the COVID category. It was unexpected to observe higher toxicity levels on Twitter given its stringent content guidelines and moderation policies; conversely, Parler's guidelines highlight a lack of moderation. One possible explanation for the unexpectedly high toxicity in the Parler COVID dataset is that Twitter began removing users and posts sharing COVID-19 misinformation in April 2020, sparking anger and prompting many users to migrate to Parler <ref type="bibr" target="#b26">[27]</ref>. In addition to being detrimental to the overall health of social networks, the moderate proportion of toxic content on these platforms surrounding COVID-19 topics may affect users' perceptions of the effectiveness and importance of periodic lockdowns, wearing face masks, and becoming vaccinated. 
The contributions of this work include: 1) evidence that Twitter contained a higher level of toxicity regarding COVID-19 discourse than Parler; 2) identification of prominent conspiracy theory themes, such as those regarding Bill Gates and the QAnon group, in the COVID-19 vaccine discussion within the Twitter network; 3) identification of defined clusters of users, including a misinformation echo chamber, in the COVID-19 vaccine discussions within the Parler network; and 4) identification of important bridge nodes in Parler that spread COVID-19 vaccine misinformation throughout the rest of the network. Of specific interest was a bridge node that connected the QAnon group with an identified pro-Trump group.</p><p>The approach employed to gather and scrutinize data via the chosen seed hashtags is a possible constraint of this paper. The classification model used in this paper encounters challenges in distinguishing the intended meaning of profanity in its semantic context; consequently, it frequently categorizes obscene language as toxic regardless of the user's intent. In our future research, we will be mindful of this limitation and account for it accordingly. In future work, we plan to create and compare Twitter and Parler mention, shared-URL, and retweet/echo networks. These additional analyses will improve our ability to identify misinformation and conspiracy theories and to identify the users and communities that spread them. Another concern we can pursue in future work is users suspected of being bots. Using Botometer, we can detect the users with the highest probability of being bots; by eliminating them, we can rebuild our network, recalculate toxicity, and expand our analysis. We will also further explore the vaccine and lockdown topics, given their notably higher toxicity, and expand the analysis with the addition of topic modeling and models for the diffusion of information on OSNs. 
These efforts provide an important perspective on the effects of differences in platform moderation and are a first step in a cross-platform analysis of toxicity with implications for public health and public trust.</p><p>Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding organizations. The researchers gratefully acknowledge the support.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Twitter (left) and Parler (right) posts by weekly publication, showing posting frequency.</figDesc><graphic coords="5,311.40,297.20,240.00,163.20" type="bitmap" /></figure>
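The co-hashtag networks and bridge nodes discussed in the contributions can be sketched in a few lines of pure Python: pair hashtags that co-occur within a post, then flag nodes whose removal splits the graph. This is only an illustrative sketch, not the paper's actual pipeline (which presumably used dedicated network-analysis tooling); the sample hashtags below are made up for the example.

```python
from collections import Counter, defaultdict
from itertools import combinations

def co_hashtag_edges(posts):
    """Count how often each pair of hashtags co-occurs within a post."""
    edges = Counter()
    for tags in posts:
        for a, b in combinations(sorted(set(tags)), 2):
            edges[(a, b)] += 1
    return edges

def _components(adj, removed=frozenset()):
    """Number of connected components after dropping `removed` nodes."""
    seen, count = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(n for n in adj[node]
                         if n not in seen and n not in removed)
    return count

def bridge_nodes(edges):
    """Nodes whose removal disconnects the co-hashtag graph (cut vertices)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    base = _components(adj)
    return sorted(n for n in adj if _components(adj, removed={n}) > base)
```

For the small graphs here the O(n·(n+m)) remove-and-recount approach is fine; at scale one would use a linear-time articulation-point algorithm instead.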
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Platform Comparison of Toxicity Means, Medians and Standard Deviations</figDesc><graphic coords="7,126.50,527.21,358.60,157.79" type="bitmap" /></figure>
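The per-dataset summaries behind Figure 2 and Table 2 (record counts, means, medians, standard deviations, percent toxic) reduce to simple descriptive statistics over per-post toxicity scores. A minimal sketch follows; note that the 0.5 toxic/non-toxic cutoff is an illustrative assumption, not a threshold stated in this excerpt.

```python
import statistics

def toxicity_summary(scores, threshold=0.5):
    """Summarize per-post toxicity scores in [0, 1].

    `threshold` (0.5 here) is an illustrative assumption for counting
    a post as toxic, not a value taken from the paper.
    """
    toxic = sum(1 for s in scores if s >= threshold)
    return {
        "records": len(scores),
        "mean": statistics.mean(scores),
        "median": statistics.median(scores),
        "sd": statistics.stdev(scores),  # sample standard deviation
        "pct_toxic": 100.0 * toxic / len(scores),
    }
```

Running this per platform and per category (COVID, lockdown, mask, vaccine) reproduces the shape of Table 2 and the mean/median/SD comparison of Figure 2.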
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 .Figure 4 .Figure 5 .Figure 6 .Figure 7 .</head><label>34567</label><figDesc>Figure 3. f*ckcovid Hashtag for three classes (Toxicity, Obscene, Insult) for Twitter (left) vs Parler (right)</figDesc><graphic coords="8,57.88,257.37,246.69,136.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 8 .</head><label>8</label><figDesc>Figure 8. Twitter co-hashtag network graph identifying a mass of users sharing hashtags indicative of various Bill Gates conspiracies, anti-vaccination ideas, and the far-right conspiracy group QAnon. The #wwg1wga bridge node can be seen connecting election conspiracy theorists (Figure 10, right), who actively associated the #f*ckvaccine hashtag with other hashtags indicative of the 2020 U.S. presidential election, such as #maga, #trump2020, and #stopthesteal, which is related to the misinformation narrative regarding widespread election fraud. The other bridge nodes identified in the Parler network were #sheep and #fightback. Our drill-down also allowed us to identify a misinformation echo chamber operating within this Parler co-hashtag network (Figure 10).</figDesc><graphic coords="9,130.50,385.41,350.97,200.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 9 .</head><label>9</label><figDesc>Figure 9. Parler co-hashtag network graph component identifying important bridge nodes.</figDesc><graphic coords="10,127.50,97.99,357.00,182.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 10 .</head><label>10</label><figDesc>Figure 10. Parler co-hashtag network graph component identifying a misinformation echo chamber.</figDesc><graphic coords="10,168.00,442.63,275.94,185.50" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 -</head><label>1</label><figDesc>Hashtags used for Twitter data collection and Parler dataset filtration. Eight datasets were created, four for each platform with corresponding hashtags and keywords. For Parler data, we used an open dataset created by Aliapoulios et al. which was a complete dataset of all Parler data from August 2018 to when Parler was shut down in January 2021</figDesc><table><row><cell>Category</cell><cell>Hashtags/Keywords</cell><cell>Records</cell></row><row><cell>COVID</cell><cell>#f*ckyourcovid, f*ckthecovid, #f*ckcovid</cell><cell>44,492</cell></row><row><cell>Lockdown</cell><cell>#f*ckyourlockdown/s, #f*ckthelockdown/s, #f*cklockdown/s</cell><cell>7,437</cell></row><row><cell>Mask</cell><cell>#f*ckyourmask/s, #f*ckthemask/s, #f*ckmask/s</cell><cell>28,588</cell></row><row><cell>Vaccine</cell><cell>#f*ckyourvaccine/s, #f*ckthevaccine/s, #f*ckvaccine/s</cell><cell>6,538</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 .</head><label>2</label><figDesc>Number and percentage of toxic posts on Twitter and Parler for all eight datasets.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 -</head><label>3</label><figDesc>Statistical Analysis of eight datasets</figDesc><table><row><cell>Pairwise Comparison</cell><cell>p-value &lt;0.05</cell><cell>Platform</cell><cell>Records</cell><cell>Mean</cell><cell>SD</cell></row><row><cell></cell><cell></cell><cell>Twitter</cell><cell>28,131</cell><cell>0.234</cell><cell>0.388</cell></row><row><cell>Twitter COVID dataset -Parler COVID dataset</cell><cell>2.81e-55</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell>Parler</cell><cell>16,361</cell><cell>0.294</cell><cell>0.402</cell></row><row><cell>Twitter Lockdown dataset -Parler Lockdown dataset</cell><cell>1.87e-43</cell><cell>Twitter Parler</cell><cell>1,472 5,965</cell><cell>0.326 0.176</cell><cell>0.406 0.361</cell></row><row><cell></cell><cell></cell><cell>Twitter</cell><cell>2,423</cell><cell>0.313</cell><cell>0.416</cell></row><row><cell>Twitter Mask dataset -Parler Mask dataset</cell><cell>5.70e-09</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell>Parler</cell><cell>26,165</cell><cell>0.264</cell><cell>0.388</cell></row><row><cell></cell><cell></cell><cell>Twitter</cell><cell>610</cell><cell>0.302</cell><cell>0.411</cell></row><row><cell>Twitter Vaccine dataset -Parler Vaccine dataset</cell><cell>5.05e-42</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell>Parler</cell><cell>5,928</cell><cell>0.119</cell><cell>0.304</cell></row></table></figure>
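Table 3 reports a p-value for each Twitter-Parler pairwise comparison, but this excerpt does not restate which test produced them. As one plausible choice for skewed toxicity-score distributions, the sketch below implements a two-sided Mann-Whitney U test with a normal approximation (tie correction to the variance omitted for brevity); treat it as illustrative rather than the paper's exact procedure.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Ties receive average ranks; the tie correction to the variance is
    omitted for brevity, which is acceptable for a sketch on large
    samples with few ties.
    """
    pooled = sorted((v, idx) for idx, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j + 1) / 2.0   # 1-based ranks i+1 .. j, averaged
        for k in range(i, j):
            ranks[pooled[k][1]] = avg_rank
        i = j
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[:n1])               # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return u1, p
```

Applying such a test to each of the four category pairs (COVID, lockdown, mask, vaccine) yields one p-value per row, as in Table 3; at the sample sizes reported there, the normal approximation is well justified.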
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Acknowledgments</head><p>This research is funded in part by the U.S. National Science Foundation (OIA-1946391, OIA-1920920, IIS-1636933, ACI-1429160, and IIS-1110868), U.S. Office of the Under Secretary of Defense for Research and Engineering (FA9550-22-1-0332), U.S. Office of Naval Research (N00014-10-1-0091, N00014-14-1-0489, N00014-15-P-1187, N00014-16-1-2016, N00014-16-1-2412, N00014-17-1-2675, N00014-17-1-2605, N68335-19-C-0359, N00014-19-1-2336, N68335-20-C-0540, N00014-21-1-2121, N00014-21-1-2765, N00014-22-1-2318), U.S. Air Force Research Laboratory, U.S. Army Research Office (W911NF-20-1-0262, W911NF-16-1-0189, W911NF-23-1-0011), U.S. Defense Advanced Research Projects Agency (W31P4Q-17-C-0059), Arkansas Research Alliance, the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock, and the Australian Department of Defence Strategic Policy Grants Program (SPGP) (award number: 2020-106-094).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Free speech or Free Hate Speech? Analyzing the Proliferation of Hate Speech in Parler</title>
		<author>
			<persName><forename type="first">A</forename><surname>Israeli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Tsur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)</title>
				<meeting>the Sixth Workshop on Online Abuse and Harms (WOAH)</meeting>
		<imprint>
			<date type="published" when="2022-07">2022. July</date>
			<biblScope unit="page" from="109" to="121" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Misinformation, Disinformation, and Online Propaganda</title>
		<author>
			<persName><forename type="first">A</forename><surname>Guess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lyons</surname></persName>
		</author>
		<idno type="DOI">10.1017/9781108890960.003</idno>
		<ptr target="https://doi.org/10.1017/9781108890960.003" />
	</analytic>
	<monogr>
		<title level="j">Social Media and Democracy</title>
		<imprint>
			<biblScope unit="page" from="10" to="33" />
			<date type="published" when="2020-09">2020. September</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Antisocial behavior in online discussion communities</title>
		<author>
			<persName><forename type="first">J</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Danescu-Niculescu-Mizil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Leskovec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 9th International Conference on Web and Social Media</title>
				<meeting>the 9th International Conference on Web and Social Media</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="61" to="70" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">First steps in quantifying toxicity and verbal violence on Twitter</title>
		<author>
			<persName><forename type="first">J</forename><surname>Guberman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schmitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hemphill</surname></persName>
		</author>
		<idno type="DOI">10.1145/2818052.2869107</idno>
		<ptr target="https://doi.org/10.1145/2818052.2869107" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM Conference on Computer Supported Cooperative Work</title>
				<meeting>the ACM Conference on Computer Supported Cooperative Work</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="277" to="280" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Reducing Controversy by Connecting Opposing Views</title>
		<author>
			<persName><forename type="first">K</forename><surname>Garimella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De Francisci Morales</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gionis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mathioudakis</surname></persName>
		</author>
		<idno type="DOI">10.1145/3018661.3018703</idno>
		<ptr target="https://doi.org/10.1145/3018661.3018703" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Tenth ACM International Conference on Web Search and Data Mining</title>
				<meeting>the Tenth ACM International Conference on Web Search and Data Mining<address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>Association Computing Machinery</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="81" to="90" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A conceptual tool to eliminate filter bubbles in social networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Amrollahi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Australasian Journal of Information Systems</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Toxicity and verbal aggression on social media: Polarized discourse on wearing face masks during the COVID-19 pandemic</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pascual-Ferrá</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Alperstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Barnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">N</forename><surname>Rimal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Big Data &amp; Society</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">20539517211023533</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Volume and patterns of toxicity in social media conversations during the COVID-19 pandemic</title>
		<author>
			<persName><forename type="first">S</forename><surname>Majó-Vázquez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Nielsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Verdú</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Rao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>De Domenico</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Papaspiliopoulos</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Twitter discussions and emotions about the COVID-19 pandemic: Machine learning approach</title>
		<author>
			<persName><forename type="first">J</forename><surname>Xue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of medical Internet research</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page">e20550</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Hate Speech on Twitter: A Pragmatic Approach to Collect Hateful and Offensive Expressions and Perform Hate Speech Detection</title>
		<author>
			<persName><forename type="first">H</forename><surname>Watanabe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bouazizi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ohtsuki</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2018.2806394</idno>
		<ptr target="https://doi.org/10.1109/ACCESS.2018.2806394" />
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="13825" to="13835" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">A Review of Standard Text Classification Practices for Multi-label Toxicity Identification of Online Content</title>
		<author>
			<persName><forename type="first">I</forename><surname>Gunasekara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Nejadgholi</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/w18-5103</idno>
		<ptr target="https://doi.org/10.18653/v1/w18-5103" />
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="21" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Hanu</surname></persName>
		</author>
		<title level="m">Unitary team. Detoxify. Github</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Comparing Toxicity Across Social Media Platforms for COVID-19 Discourse</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">B</forename><surname>Noor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yousefi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Spann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2302.14270</idno>
	</analytic>
	<monogr>
		<title level="m">The Ninth International Conference on Human and Social Analytics HUSO 2023 -arXiv preprint</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Identifying toxicity within YouTube video comment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Obadimu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hussain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-21741-9_22</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-21741-9_22" />
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</title>
		<imprint>
			<biblScope unit="page" from="214" to="223" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Developing an Epidemiological Model to Study Spread of Toxicity on YouTube</title>
		<author>
			<persName><forename type="first">A</forename><surname>Obadimu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Maleki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-61255-9_26</idno>
		<ptr target="https://doi.org/10.1007/978-3-030-61255-9_26" />
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</title>
		<imprint>
			<biblScope unit="page" from="266" to="276" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Exploring Media Bias and Toxicity in South Asian Political Discourse</title>
		<author>
			<persName><forename type="first">A</forename><surname>Qayyum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Gilani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Latif</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qadir</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICOSST.2018.8632183</idno>
		<ptr target="https://doi.org/10.1109/ICOSST.2018.8632183" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of International Conference on Open Source Systems and Technologies</title>
				<meeting>International Conference on Open Source Systems and Technologies</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Identifying latent toxic features on YouTube using nonnegative matrix factorization</title>
		<author>
			<persName><forename type="first">A</forename><surname>Obadimu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Ninth International Conference on Social Media Technologies, Communication, and Informatics</title>
				<imprint>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Developing a socio-computational approach to examine toxicity propagation and regulation in COVID-19 discourse on YouTube</title>
		<author>
			<persName><forename type="first">A</forename><surname>Obadimu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Khaund</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mead</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Marcoux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ipm.2021.102660</idno>
		<ptr target="https://doi.org/10.1016/j.ipm.2021.102660" />
	</analytic>
	<monogr>
		<title level="j">Information Processing and Management</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Toxicity and verbal aggression on social media: Polarized discourse on wearing face masks during the COVID-19 pandemic</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pascual-Ferrá</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Alperstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Barnett</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rimal</surname></persName>
		</author>
		<idno type="DOI">10.1177/20539517211023533</idno>
		<ptr target="https://doi.org/10.1177/20539517211023533" />
	</analytic>
	<monogr>
		<title level="j">Big Data and Society</title>
		<imprint>
			<biblScope unit="issue">8</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Determining Boundary Conditions of Social Influence for Social Networks Research</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chandrasekara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Sedera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Australasian Journal of Information Systems</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">High-risk deviant decisions: does neutralization still play a role?</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">S</forename><surname>Trinkle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Warkentin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Malimage</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Raddatz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Association for Information Systems</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">M</forename><surname>Aliapoulios</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bevensee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Blackburn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bradlyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>De Cristofaro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Stringhini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zannettou</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">A Large Open Dataset from the Parler Social Network</title>
		<idno type="DOI">10.5281/zenodo.4442460</idno>
		<ptr target="https://doi.org/10.5281/zenodo.4442460" />
	</analytic>
	<monogr>
		<title level="j">Zenodo</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">What Is QAnon, the Viral Pro-Trump Conspiracy Theory?</title>
		<author>
			<persName><forename type="first">K</forename><surname>Roose</surname></persName>
		</author>
		<ptr target="https://www.nytimes.com/article/what-is-qanon.html" />
	</analytic>
	<monogr>
		<title level="j">NY Times</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">What is QAnon? What does WWG1WGA mean? The conspiracy theory that explains everything and nothing</title>
		<author>
			<persName><forename type="first">W</forename><surname>Rahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Patterson</surname></persName>
		</author>
		<ptr target="https://www.cbsnews.com/news/what-is-the-qanon-conspiracy-theory/" />
	</analytic>
	<monogr>
		<title level="j">CBS News</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>McDonald</surname></persName>
		</author>
		<ptr target="https://www.factcheck.org/2020/08/new-plandemic-video-peddles-misinformation-conspiracies/" />
		<title level="m">New &apos;Plandemic&apos; Video Peddles Misinformation, Conspiracies</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">FactCheck</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Peters</surname></persName>
		</author>
		<ptr target="https://www.theverge.com/2020/4/22/21231956/twitter-remove-covid-19-tweets-call-to-action-harm-5g" />
		<title level="m">Twitter will remove misleading COVID-19-related tweets that could incite people to engage in &apos;harmful activity</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>The Verge</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
