<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Double-edged Swords: The Good and the Bad of Privacy and Anonymity in Social Media</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Mainack</forename><surname>Mondal</surname></persName>
							<email>mainack@mpi-sws.org</email>
						</author>
						<title level="a" type="main">Double-edged Swords: The Good and the Bad of Privacy and Anonymity in Social Media</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">F50AE37307DD381FB218A5E89A7413BB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T02:20+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>anonymity</term>
					<term>privacy</term>
					<term>social media</term>
					<term>Whisper</term>
					<term>Twitter</term>
					<term>pattern recognition</term>
					<term>hate speech</term>
					<term>social sensors</term>
				</keywords>
			</textClass>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">NEED FOR PRIVACY AND ANONYMITY IN ONLINE SOCIAL MEDIA SITES</head><p>Online Social Media sites (OSMs) like Facebook and Twitter have drastically changed the way users communicate with each other and share content. OSMs provide an inexpensive communication medium that allows anyone to quickly reach millions of users. Consequently, on these platforms anyone can publish content and anyone interested in the content can obtain it, representing a transformative revolution in our society. However, the very strength of OSMs, the ability to easily reach millions of users, also means that there might be dire consequences for content publishers if their content reaches the wrong people. For example, a content creator might lose her job simply because her boss viewed her rant on Facebook <ref type="bibr" target="#b2">[3]</ref>, or, even worse, government organizations might press criminal charges against an activist solely based on the political opinions expressed in her OSM posts <ref type="bibr" target="#b1">[2]</ref>. To that end, as OSMs have gradually become a medium for freedom of expression, especially in times of unrest <ref type="bibr" target="#b8">[9]</ref>, there is a strong need to protect users and their freedom of expression. In other words, users of OSMs need more privacy and anonymity so that they can preserve their right to free speech without fear of repercussions from their government or other authorities. However, there is a cost: more private and anonymous platforms can also be abused to hurt other users, e.g., via spreading hate, cyberbullying, or trolling.</p><p>In this talk, we will emphasize that we can leverage OSM data as social sensors to understand privacy and anonymity practices in social media and to improve upon them. Moreover, we argue that privacy and anonymity are double-edged swords: needed by many (e.g., by activists during the Arab Spring) but also abused by some to harm others. To that end, we argue that these social sensors might also enable OSM operators to stop the abuse. We next expand upon how social sensors can be used to (i) understand the need for privacy and anonymity and (ii) limit the abuse of these technologies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">UNDERSTANDING PRIVACY AND ANONYMITY NEEDS VIA SOCIAL SENSORS</head><p>We start by pointing out that OSM data acts as a valuable microscope for looking into the privacy and anonymity needs of millions of users. Traditionally, these needs are explored in the Human Computer Interaction (HCI) community via semi-structured interviews and user surveys. However, OSMs provide a tremendous opportunity, in the form of social sensors, to scientifically measure how millions of users are using (or abusing) privacy and anonymity in a real-world setting.</p><p>Understanding privacy via social sensors: A plethora of legal, sociological, psychological, and even philosophical scholars have aimed to understand concrete aspects of privacy. However, their definitions tell us what privacy is, but not how privacy is enforced and used in practice. Using OSM data we can partially bridge this gap, particularly in a social context. We propose exposure control <ref type="bibr" target="#b4">[5]</ref>, an improvement over current privacy management models. Exposure is simply defined as who actually views a piece of content, and controlling exposure satisfies many privacy needs of OSM users today. However, exposure control is a theoretical model; we still need to understand how exposure is controlled in the real world and how current mechanisms can be improved.</p><p>To that end, social sensors enable us to measure how users actually control their exposure today and to point out the limitations of current mechanisms. We have leveraged social sensors to identify the usage of social access control lists (SACLs) in the real world <ref type="bibr" target="#b5">[6]</ref>. Using real-world SACL usage data, we propose a simple cache-based mechanism to make SACLs more usable. Further, we looked into how users protect their longitudinal privacy by changing the privacy settings of their historical data <ref type="bibr" target="#b6">[7]</ref>. We found that a surprisingly high number of users are controlling the longitudinal exposure of their content: more than 30% of social content posted 6 years back has been withdrawn by its owners. However, using this same OSM data we identify some key problems with today's longitudinal privacy mechanisms and propose improvements to longitudinal exposure control. Our work demonstrates the usefulness of social sensors for understanding and improving privacy. However, there is a need to further understand other aspects of exposure in different social scenarios (e.g., privacy violation via social search); we note that our objective of improving privacy can be achieved by further leveraging the enormous behavioral data from OSMs.</p><p>Understanding anonymity via social sensors: Anonymity is another need of OSM users that has become more and more important in recent years. For example, during political turmoil, activists or whistle-blowers want to reach millions of fellow citizens, but do not want to face the wrath of authorities who monitor OSMs to find and silence them. To that end, anonymous OSMs like Whisper, Yik Yak, or 4chan are becoming popular as mediums for exercising freedom of expression. However, it is important to understand the usage of these anonymous platforms to determine whether millions of users (and not only a handful of activists) indeed use them to post content that needs anonymity (i.e., personal experiences or strong opinions). To that end, we collected large-scale data from Whisper <ref type="bibr" target="#b0">[1]</ref> and compared this content with that of a non-anonymous OSM, Twitter. Using these datasets as sensors, we found that the anonymity sensitivity of most whispers (posts on Whisper), unlike that of tweets (posts on Twitter), is not binary. Instead, most whispers exhibit many shades, or different levels, of anonymity. The content of whispers ranges from confessions to opinions on LGBTQ issues. We also found that the linguistic differences between whispers and tweets are so significant that we could train automated classifiers to distinguish between them with reasonable accuracy. Our findings shed light on human behavior in anonymous media systems that lack the notion of an identity. Among other implications, these social sensors also open an exciting avenue for understanding the disinhibition effect, where, given anonymity, users post content that they otherwise would not.</p></div>
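The classifier idea can be illustrated with a minimal sketch. The training posts below are invented toy examples (the actual study trained on large Whisper and Twitter corpora with more careful feature design), but a standard bag-of-words pipeline of this shape is enough to separate two writing styles:

```python
# Minimal sketch of a whisper-vs-tweet style classifier; the posts are
# invented toy data, not samples from the study's corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

whispers = [  # anonymous-style posts: confessions, personal feelings
    "I never told anyone this but I regret dropping out",
    "sometimes I feel like nobody at work really knows me",
    "I secretly wish I could move to another city and start over",
]
tweets = [  # non-anonymous posts: public, identity-linked chatter
    "Great game tonight, congrats to the team!",
    "Excited to announce our new product at the expo this week",
    "Thanks everyone for coming to my talk today",
]
texts = whispers + tweets
labels = [1] * len(whispers) + [0] * len(tweets)  # 1 = whisper-like

# TF-IDF over unigrams and bigrams feeds a logistic regression model.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["I have a confession nobody knows about"]))
```

On the real corpora, lexical cues of this kind (first-person framing, emotional vocabulary) are exactly the linguistic differences that made automated separation feasible.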
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">LIMITING ABUSE OF PRIVACY AND ANONYMITY VIA SOCIAL SENSORS</head><p>Privacy and anonymity, however, also have a dark aspect, which cannot be ignored in today's world. When OSMs enable people to express themselves privately and anonymously, there are always some users who abuse these systems and hurt others. In particular, OSMs have become a fertile ground for inflamed discussions that usually polarize 'us' against 'them', resulting in many cases of insulting and offensive language. There are cases where individuals have been mentally scarred forever by public shaming on online media or have received death threats. The situation is becoming so worrisome that many governments are now taking active steps to stop online abuse. For instance, in the UK, 43.5% of children between the ages of 11 and 16 have been bullied on social sites <ref type="bibr" target="#b7">[8]</ref>.</p><p>We argue that we can leverage OSM data as sensors to detect abusive acts, and that this very data can thus be used to limit the abuse of OSMs. Specifically, we note that abusive acts like cyberbullying, trolling, and hate speech take place on OSMs, and thus there is a chance to automatically detect and limit them right when they are posted. We present a proof-of-concept example of this idea: understanding hate speech in social media <ref type="bibr" target="#b3">[4]</ref>. We use sentence structures to create a high-precision dataset of hate speech in OSMs. Using this dataset, we investigate the types of hate that propagate in OSMs. We found that hate speech based on race and on physical or behavioral features is common in OSMs. Moreover, there are intra- as well as inter-country differences in the types of hate speech posted. Our findings demonstrate the effectiveness of sensing abuse using OSM data and hint at the possibility of improving abuse detection mechanisms.</p></div>
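The sentence-structure approach can be sketched as follows. The template, intensifier list, and target shape here are hypothetical stand-ins (the study used a curated set of patterns and target categories), but they show why first-person templates yield high-precision matches:

```python
# Hypothetical sketch of template-based hate speech collection: match
# first-person sentences of the form "I (intensifier) hate (target)".
# The pattern and word lists are illustrative, not the study's actual ones.
import re

TEMPLATE = re.compile(
    r"\bi\s+(?:really|so|absolutely|just)?\s*hate\s+([a-z]+\s+people)\b",
    re.IGNORECASE,
)

def extract_targets(posts):
    """Return target phrases matched by the first-person hate template."""
    targets = []
    for post in posts:
        for match in TEMPLATE.finditer(post):
            targets.append(match.group(1).lower())
    return targets

posts = [
    "I really hate rude people on my timeline",
    "we hate losing matches",  # no first-person template, so no match
    "i absolutely hate arrogant people, honestly",
]
print(extract_targets(posts))  # prints ['rude people', 'arrogant people']
```

Because only posts that explicitly self-report hate toward a target match the template, the collected dataset trades recall for precision, which is what makes it usable as ground truth for characterizing hate types.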
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">FUTURE DIRECTIONS: A CALL FOR ACTION</head><p>Finally, we would like to conclude this talk with a call for action: leverage the available social sensors to improve privacy and anonymity in OSMs as well as to keep the abuse of these platforms at bay. We point out two high-level directions:</p><p>Understanding privacy and anonymity requirements of users: OSMs provide researchers a unique opportunity to analyze an astronomical amount of user-generated data; this data can be leveraged as sensors to understand and improve aspects like privacy and anonymity. Specifically, it can be used to find the mechanisms that users employ to control the exposure of their data and to check the effectiveness of those mechanisms. Further, data from anonymous OSMs like Whisper can also be used to understand the behavior of users in anonymous social media sites and can help in understanding their anonymity requirements. For example, an important question to investigate would be: how can we measure and satisfy the different anonymity requirements of users for different types of content?</p><p>Limiting abuse of OSMs leveraging big data: Another field, which has traditionally received less research focus than privacy and anonymity, is limiting the abuse of OSMs. It is safe to say that, although it has become more and more important in recent years, research on detecting and limiting online abuse is still in its nascent stage. For example, what are the different classes of online abuse? What are their concrete definitions and characteristics? Are there enough social signals in OSM data to detect abusive behavior? Can we build effective systems to limit these abuses in real time? In fact, a very first step might be methodological: how can we automatically detect different types of abuse in online social media? We strongly feel that OSM data can help tremendously in solving both of these challenges, and that correctly leveraging this data paves the way toward a safer online environment.</p></div>		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The Many Shades of Anonymity: Characterizing Anonymous Social Media Content</title>
		<author>
			<persName><forename type="first">Denzil</forename><surname>Correa</surname></persName>
		</author>
		<author>
		<persName><forename type="first">Leandro</forename><forename type="middle">Araújo</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mainack</forename><surname>Mondal</surname></persName>
		</author>
		<author>
		<persName><forename type="first">Fabrício</forename><surname>Benevenuto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krishna</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICWSM&apos;15</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="https://english.alarabiya.net/en/media/digital/2017/04/12/Iran-social-media-activists-held-on-security-charges-.html" />
		<title level="m">Iran social media activists held on &apos;security&apos; charges</title>
				<imprint>
			<date type="published" when="2017-05">2017. Accessed on May 2017</date>
		</imprint>
	</monogr>
	<note>Facebook activism</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="http://people.com/celebrity/employees-who-were-fired-because-of-social-media-posts/" />
		<title level="m">20 Tales of Employees Who Were Fired Because of Social Media Posts</title>
				<imprint>
			<date type="published" when="2016-05">2016. Accessed on May 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A Measurement Study of Hate Speech in Social Media</title>
		<author>
			<persName><forename type="first">Mainack</forename><surname>Mondal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Leandro</forename><forename type="middle">Araújo</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabrício</forename><surname>Benevenuto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">HT&apos;17</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Beyond Access Control: Managing Online Privacy via Exposure</title>
		<author>
			<persName><forename type="first">Mainack</forename><surname>Mondal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Druschel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krishna</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alan</forename><surname>Mislove</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">USEC&apos;14</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Understanding and Specifying Social Access Control Lists</title>
		<author>
			<persName><forename type="first">Mainack</forename><surname>Mondal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yabing</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bimal</forename><surname>Viswanath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krishna</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alan</forename><surname>Mislove</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SOUPS&apos;14</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Forgetting in Social Media: Understanding and Controlling Longitudinal Exposure of Socially Shared Data</title>
		<author>
			<persName><forename type="first">Mainack</forename><surname>Mondal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Johnnatan</forename><surname>Messias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Saptarshi</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Krishna</forename><forename type="middle">P</forename><surname>Gummadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aniket</forename><surname>Kate</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SOUPS&apos;16</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="https://nobullying.com/cyberbullying-in-uk/" />
		<title level="m">The Startling Facts about Cyberbullying in the UK</title>
				<imprint>
			<date type="published" when="2017-05">2017. Accessed on May 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="https://mic.com/articles/10642/twitter-revolution-how-the-arab-spring-was-helped-by-social-media" />
		<title level="m">Twitter Revolution: How the Arab Spring Was Helped By Social Media</title>
				<imprint>
			<date type="published" when="2012-05">2012. Accessed on May 2017</date>
		</imprint>
	</monogr>
	<note>Twitter Arab Spring</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
