Security and Morality: A Tale of User Deceit

L. Jean Camp
School of Informatics, Indiana University
+1-812-856-1865
ljcamp@indiana.edu

Cathleen McGrath
College of Business Administration, Loyola Marymount University
+1-310-216-2045
cmcgrath@lmu.edu

Alla Genkina
Information Studies, UCLA
alla@ayre.org

ABSTRACT
There has been considerable debate about the apparent irrationality of end users in choosing with whom to share information, with much of the discourse crystallized in research on phishing. Designs for security technology in general, anti-spam technology, and anti-phishing technology have been targeted at specific problems with distinct methods of mitigation. In contrast, studies of human risk behaviors argue that such specific targets for specific problems are unlikely to provide a significant increase in user trust of the Internet, because humans lump and generalize. We initially theorized that communications to users need to be less specific to technical failures and more deeply embedded in social or moral terms. Our experiments indicate that users respond more strongly to a privacy policy failure than to an arguably more risky technical failure. From this and previous work we conclude that design for security and privacy needs to be more expansive: there should be more bundling of signals and products, rather than further delineation of problems into those solvable by discrete tools. Usability must be more than interface design; it must integrate security and privacy into a trust interaction.

Categories and Subject Descriptors
Computers and Society

General Terms
Security, Management, Experimentation

Keywords
Security, Trust, Trustworthiness

1. INTRODUCTION

1.1 Overview
In the first section of this paper we review the literature that inspired our trust experimentation. In the second section we describe our experiments. In the third section we discuss the results of the experimentation. In the fourth section we describe the potential implications of our results for the design of user interactions for risk communication.

Safe, reliable, and secure computing requires empowered users. Specifically, users must be empowered to distinguish between trustworthy and untrustworthy machines on the network [13]. Of course, no machine that can be connected is perfectly secure, and no home machine is without user information. To further complicate the transition, this evolution must occur in a dynamic, widely deployed network. The capacity of humans as security managers depends on the creation of technology that is designed with a well-founded understanding of the behavior of human users. Thus systems must not only be trustworthy but must also be identifiable as trustworthy. For this to happen we must root system development in an understanding of the cues that humans use to determine trustworthiness.

The efficacy of trust technologies is to some degree a function of their assumptions about human trust behaviors in the network. Note that the definition of trust in this project is taken from Coleman [11]: a rational actor's decision to place himself or herself in a vulnerable position relative to others in the hope of accomplishing something that is otherwise not possible. Its operational focus fits well with the computer science perspective. It is explicitly not the definition of trust as an internal state of which expressed confidence is the behavioral sign, as seen in [17].
Building upon insights that have emerged from studies of human-computer interaction and game-theoretic studies of trust, we have developed a set of hypotheses on human behavior with respect to computer-mediated trust. We then test these hypotheses using an experiment based on proven social science methods, and we examine the implications of the confirmation or rejection of the hypotheses for technical design, with the use of structured formal protocol analysis. First, we narrow the larger question of security to the more constrained question of human trust behaviors. Second, we extract from the larger literature testable hypotheses with respect to trust behaviors. Third, we develop an experimental design, in which the trust behavior is a willingness to share information, that gives a basis for rejecting the testable hypotheses.

Technical security experts focus on the considerable technological challenges of securing networks and devising security policies. These essential efforts would be more effective in practice if designs more systematically addressed the (sometimes irrational) people who are critical components of networked information systems. Accordingly, efforts at securing these systems should involve not only attention to machines, networks, protocols, and policies, but also a systematic understanding of how people participate in and contribute to the security and trust of networks.

For this research, we use Coleman's [11] definition of trust, which accounts for the rational action of individuals in social situations, to structure the experimental situations that subjects face. Coleman's definition of trust is operational and has four components:
1. Placement of trust allows actions that otherwise are not possible.
2. If the person in whom trust is placed (the trustee) is trustworthy, then the trustor will be better off than if he or she had not trusted. Conversely, if the trustee is not trustworthy, then the trustor will be worse off than if he or she had not trusted.
3. Trust is an action that involves the voluntary placement of resources (physical, financial, intellectual, or temporal) at the disposal of the trustee, with no real commitment from the trustee.
4. A time lag exists between the extension of trust and the result of the trusting behavior.
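This operational definition admits a compact expected-value statement, essentially the decision rule Coleman himself derives; the symbols below are ours, not the paper's. If a trustor assigns probability p to the trustee being trustworthy, with potential gain G from well-placed trust and potential loss L from misplaced trust, then placing trust has positive expected value exactly when

    pG - (1 - p)L > 0,   equivalently   p / (1 - p) > L / G.

The time lag in the fourth component is what makes p a genuine uncertainty at the moment of decision; the experiments below probe how betrayals of different kinds revise subjects' implicit estimates of p.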
1.2 Theoretical Foundation
The study of network security is the study of who can be trusted for what action, and of how to ensure a trustworthy network. This understanding must build not only upon the science and engineering of security, but also upon the complex human factors that affect when and how individuals are prepared to extend trust to the agents with whom they interact and transact: computers, people, and institutions. This is a problem that has received much comment but little formal quantitative study [16, 25].

Humans appear to be ill suited as computing security managers. Arguments have been made from a psychological perspective for embedding security in the operating system [25]. In addition, there is a continuing debate about making the network itself more trustworthy [10]. As technology becomes more complex, users develop simplified abstractions that allow them to make sense of complicated systems [36], but these flawed models may obfuscate vital security decisions. End-user security mechanisms may offer no more autonomy to the naive user than the option to perform brain surgery at home would offer medical autonomy to the naive patient. In fact, the argument that alterable code is not empowering to the user has already been made in the case of applications [10].

Social science experiments provide insights for evaluating how trust mechanisms may succeed or fail when presented to the naive user. That humans are a source of randomness is well documented, and the problems of "social engineering" are well known. Yet the inclusion of human behavior, using tested axiomatic results, is a significant extension to previous research on why security and trust systems fail [1].

The experiment described here was built upon the theoretical construction of the problem that follows. It is a two-part research investigation. First, we test the hypotheses that are explicit in the game-theory-based research on human trust behavior in the specific case of human-computer interaction. We test these hypotheses using standard experimental and quantitative methods, as described in the methods section. Second, based on these findings, we examine the suitability of various distributed trust technologies in light of the findings of the first part of this study.

Note that there are disagreements with respect to the definition and examination of trust. Trust is a concept that crosses disciplines as well as domains, so the focus of the definition differs. There are two dominant definitions of trust: operational and internal. The view held by a number of researchers is that trust should be reserved for people only: people can trust (or not trust) only other people, not inanimate objects. These researchers suggest that we use a term such as confidence or reliance to denote the analogous attitude people may hold toward objects such as computers and networks. To the extent that this is more than merely a dispute over word usage, we are sympathetic to the proposal that there are important differences in the ways trust, as opposed to confidence or reliance, operates internally (see, for example, [28, 16]). Yet in terms of building mechanisms to create a trustworthy network, we will investigate the way trust may be extended to both humans and objects.

Operational definitions of trust, like the one we are using, require a party to make a rational decision based on knowledge of possible rewards for trusting and not trusting. Trust enables higher gains, while distrust avoids potential loss. Risk aversion is therefore a critical parameter in defining trust.

In the case of trust on the Internet, operational trust must include evaluation both of a party's intention, benevolent or malevolent, and of that party's competence. Particularly in the case of intention, the information available in a physical interaction is absent. In addition, cultural cues are difficult to discern on the Internet, as the faces of most web pages are meant to be as generic as possible to avoid offense. One operational definition of trust is reliance [19]. In this case reliance is considered a result of belief in the integrity or authority of the party to be trusted. Reliance is based on the concept of mutual self-interest. Therefore the creation of trust requires structure that provides information about the trusted party, to ensure that the self-interest of the trusted party is aligned with the interest of the trusting party.
When reliance is refined, it requires that the trusted party be motivated to ensure the security of the site and to protect the privacy of the user. Under this conception, trust is illustrated by a willingness to share personal information. Camp [8] offers another operational definition of trust in which users are concerned with risk rather than risk perception. From this perspective, trust exists when individuals take actions that make them vulnerable to others.

A second perspective on trust, used by social psychologists, assumes that trust is an internal state (e.g., [17]). From this perspective, trust is a state of belief in the motivations of others. On this basis, social psychologists measure trust using structured interviews and surveys. The interviews can find high correlations between trust and a willingness to cooperate; yet trust is then not defined but rather correlated with an exhibited willingness to cooperate. This is in contrast to the working definition underlying not only this work but also most of the research referenced herein. The definition of trust used here and the set of methods used to explore it coincide exactly, and both are based in the quantitative, game-theoretic tradition of experiments on trust, in which trust is an enacted behavior rather than an internal state.

One underlying assumption is that, in addition to the technical, good network security should incorporate an increasingly systematic understanding of the ways people extend trust in a networked environment. Thus one goal of this experiment is to enable or simplify the design of systems supporting rational human trust behavior on-line, by offering a more axiomatic understanding of human trust behavior and illustrating how the axioms can be applied. The goal of our experiment is therefore to offer a way to embed a social understanding of trust, as exhibited in human action, into the design of security systems. Yet before any concepts of trust are embedded into the technical infrastructure, any implicit hypotheses developed in studies of humans as trusting entities in relation to computers must be made explicit and tested. Then it is critical to illustrate by example how these hypotheses can be effectively applied to past technical designs.

1.3 Hypothesis Development
We developed a core hypothesis under which the technologies of trust and the perspectives on trust from social science converge. Essentially, in contrast to the assumption that individuals make increasingly complex decisions in the face of increasingly complex threats, social science suggests that people are simplifiers. At its core the hypothesis points to a common point of collision: technologists may embed in the design of trust mechanisms implicit assumptions that humans are attentive, discerning, and ever rational. There are strong philosophical arguments that humans are simplifiers, and this implies that humans will use trust of machines to simplify an ever more complex world.

Hypothesis I: In terms of trust and forgiveness in the context of computer-mediated activities, there is no significant systematic difference in people's reactions to betrayals appearing to originate from malevolent human actions, on the one hand, and incompetence on the other.

According to this hypothesis, people do not discriminate on the basis of the origins of harms such as memory damage, denial of service, leakage of confidential information, etc. In particular, it does not matter whether the harms are believed by users to be the result of technical failure or of human (or institutional) malevolence. Indeed, the determination to avoid risks without concern for their origination is a characteristic of risk technology.
The hypothesis makes sense from a purely technical standpoint. Certainly good computer security should protect users from harms no matter what their sources, and failure to do so is bad in any case. Yet a second examination yields a more complex problem space, and this more complex design space in turn calls for a more nuanced solution to the problem of key revocation or patch distribution.

What this means for our purposes is that people's trust would likely be affected differentially by conditions that differ in the following ways: cases where things are believed to have gone wrong (security breaches) as a result of unpredictable, purely technical glitches; cases where failures are attributed to technical shortcuts taken by a human engineer; and thirdly cases where malevolence (or at least disinterest in another's situation) is the cause of harm. To briefly illustrate, a security breach that is attributed to an engineering error might be judged accidental and forgiven if things went wrong despite considerable precautions having been taken. Where, however, the breach is due to an error that was preventable, the reaction might be more similar to a reaction to malevolence. Readers familiar with categories of legal liability will note the parallel distinctions that the law draws between, for example, negligence and recklessness.
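To make the three-way distinction above concrete, the sketch below (ours, not from the paper; all names and message wordings are illustrative) shows how a breach notification might carry its believed origin, so that an interface can phrase the event in the differentiated, moral terms this taxonomy suggests rather than as an undifferentiated "error":

```python
from enum import Enum, auto

class BreachOrigin(Enum):
    """Believed origin of a security failure, per the three cases above."""
    TECHNICAL_GLITCH = auto()      # unpredictable, purely technical failure
    PREVENTABLE_SHORTCUT = auto()  # engineering shortcut; negligence/recklessness territory
    MALEVOLENCE = auto()           # deliberate harm, or disinterest in the user's situation

# Illustrative user-facing wordings: the first is framed as an accident,
# the latter two in increasingly moral (blame- and harm-indicating) terms.
MESSAGES = {
    BreachOrigin.TECHNICAL_GLITCH:
        "A technical fault exposed some data despite normal precautions.",
    BreachOrigin.PREVENTABLE_SHORTCUT:
        "Your data was exposed through a known, preventable weakness that was left unfixed.",
    BreachOrigin.MALEVOLENCE:
        "This site deliberately mishandled your data.",
}

def notify(origin: BreachOrigin) -> str:
    """Return origin-specific wording instead of one generic error string."""
    return MESSAGES[origin]

if __name__ == "__main__":
    print(notify(BreachOrigin.PREVENTABLE_SHORTCUT))
```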
Our second hypothesis relates to the ability of individuals to make distinctions among different computers. Computers are, of course, distinct, particularly once an operator has selected the additional applications that will run on a site and the policies that will govern the information on it. Publications in social theory (e.g., [11, 31]) predict that individuals' initial willingness to trust, and therefore to convey information in the context of a web form, will depend more on the characteristics of the individual and the interface than on the perceived locality of, or technology underlying, the web page. An empirical study of computer science students also demonstrated that personal experience with computers increases a willingness to expose information across the board [37].

Studies in human-computer interaction suggest that users, even those with considerable knowledge and experience, tend to generalize broadly from their experiences. Studies of off-line behaviors illustrate that such generalization is particularly prevalent in studies of trust within and between groups. Thus, positive experiences with a computer may generalize to the networked system (to computers) as a whole, and presumably the same would be true of negative experiences. In other words, users may draw inductive inferences to the whole system, across computers, and not simply to the particular system with which they experienced the positive transaction. Do individuals learn to distinguish between threats, or do they increasingly lump threats together?

Hypothesis II: When people interact with networked computers, they discriminate among distinct computers (hosts, websites), treating them as distinct entities, particularly in their readiness to extend trust and secure themselves from possible harms.

2. EXPERIMENTAL DESIGN
We collected data on computer users' responses to trustworthy and untrustworthy computer behavior by conducting real-time experiments that measured individuals' initial willingness to convey personal information in order to receive a service over the web, and then examined their responses to betrayals. A total of 63 students participated in the study. They were told that they were evaluating web pages as part of a business management class. Students were shown one web site (elephantmine.net), then a second site (reminders.name).

The services offered over the web sites appear to be life-management services, which require that individuals provide information (e.g., the birthday of a spouse, favored gifts, grocery brand preferences, a credit card number). After participants viewed the web pages, they responded to a series of questions about their willingness to share information with the site. The survey determined the data the subjects were willing to provide to that domain. Our services portals were designed to be similar in interface but clearly different in source, so that we could explore the question of user differentiation of threats.

This design has three fundamental components: trust, betrayal, trust. Subjects were told that they were evaluating e-commerce systems that would make their lives easier by managing gift-giving, subscription management, bill-paying, grocery shopping, dry-cleaning, and so on. They were asked their willingness to engage with such a company. Background information included overall computer experience. The questions covered typical personal information as well as information about loved ones, daily habits, and preferences.

First we tested the tendency of people to trust different machines, as illustrated by a willingness to share information, consistent with the work referenced above. The two machines have different themes and different domain names. We showed that the machines were of distinct types by clearly identifying each machine with visible labels (e.g., "Intel inside" and Tux the Linux penguin vs. "Vaio" and "powered by NT").

During the introduction of the second web page, one of two types of "betrayal" occurred. In the first, the betrayal was a change in policy representing a violation of trust in terms of the intention of the agent: students were shown a pop-up window announcing a change in privacy policy and offering a redirection to a new privacy policy. In the second condition, the "betrayal" represented a violation of trust in terms of a display of incompetence on the part of the agent: this segment of students was shown another (imaginary) person's data being displayed on the screen, illustrating a technical inability to secure information. After each "betrayal" we again tested for trust behaviors, with trust behavior again defined as the willingness to share information.
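The paper does not name the statistical test behind the significance levels reported in Table 1 below. For paired binary responses of this shape (willing/unwilling before and after a betrayal), an exact McNemar test on the discordant pairs is one standard choice; the sketch below is ours, under that assumption, with hypothetical counts:

```python
from math import comb

def mcnemar_exact(n01: int, n10: int) -> float:
    """Exact two-sided McNemar test on the discordant pairs.

    n01: participants willing to share before the betrayal, unwilling after.
    n10: participants unwilling before, willing after.
    Under the null hypothesis that the betrayal has no effect, each
    discordant participant is equally likely to switch either way,
    so n01 ~ Binomial(n01 + n10, 0.5).
    """
    n = n01 + n10
    if n == 0:
        return 1.0
    k = min(n01, n10)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # one tail
    return min(1.0, 2 * tail)                              # two-sided

# Hypothetical counts for one information type (the paper reports only
# proportions, not raw switch counts): of 63 subjects, 12 switched from
# willing to unwilling and 1 switched the other way.
print(f"p = {mcnemar_exact(12, 1):.4f}")  # p = 0.0034
```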
3. RESULTS
The results of our experiment provide insight into our hypotheses regarding users' responses to violations of trust. Table 1 shows the results for both conditions.

Table 1. Users' responses to betrayals
(Proportion of participants willing to share each type of information before and after the betrayal.)

                                             Change in privacy policy   Display of another user's private
                                             (Malevolence)              information (Incompetence)
Type of information                          Before   After             Before   After
Your credit card number                      0.16     0.09 **           0.29     0.13 **
Your Social Security number                  0.03     0.00              0.03     0.00
Your year of birth                           0.69     0.59 ***          1.00     0.90
Your IM buddy list                           0.22     0.09 **           0.16     0.13 ***
Your list of email contacts                  0.13     0.06 **           0.23     0.13 ***
Your coworkers' names                        0.44     0.31 ***          0.42     0.52
Your friends' names                          0.53     0.34 ***          0.65     0.68
Your parents' names                          0.47     0.28 ***          0.58     0.55 ***
Your family members' names                   0.47     0.28 ***          0.68     0.61 ***
Your family members' birthdays               0.66     0.47 ***          0.87     0.68 **
Your family's wedding anniversaries          0.63     0.47 ***          0.84     0.68 ***
Your family members' shopping preferences    0.53     0.38 ***          0.77     0.71 ***

** p < .01   *** p < .001

In the first condition, there is a change in the privacy policy of the web page. We classify this as a violation of trust in intention. According to the first hypothesis, in terms of effects on trust in computers and computer-mediated activity, and on readiness to forgive and move on, people do not discriminate on the basis of the origins of harms such as memory damage, denial of service, leakage of confidential information, etc. In particular, it should not matter whether the harms are believed by users to be the result of technical failure, on the one hand, or of human (or institutional) malevolence on the other.

In the second condition, participants saw that a fictional user's information was displayed when the web page was opened. As shown in Table 1, after the technical error demonstrating incompetence, participants were less willing to share information, but by a smaller margin than in the first case of a change in privacy policy. Despite the fact that the technical failure indicated an inability to keep information secure, secret, or private, the refusal to share future information decreased far more dramatically with the policy change.

The data above illustrate that we have explicitly rejected the hypothesis that all failures are the same with respect to human-driven and technical failures.

The integration of the moral or ethical element is noticeably absent in security technology design, even when there is an argument, without reference to human interaction, that such a policy would be good security practice. For example, key revocation policies and software patches all embed an assumption of uniform technical failure. A key may be revoked because of a flawed initial presentation of an attribute, a change in the state of an attribute, or a technical failure. Currently, key revocation lists are monolithic documents, and the responsibility is upon the key recipient to check them; often a revocation list includes only the date of revocation and the key. These experiments argue that the cases of initial falsification, change in status, and lost device are very different and would be treated differently. A search for possible fraudulent transactions or a criminal investigation would also view the three cases differently. Integrating the reason for a key revocation may make human reaction to key revocation more effective, and is valuable from a system as well as a human perspective.
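Standard PKI in fact reserves a field for exactly this distinction: an X.509 CRL entry may carry a reasonCode extension (e.g., keyCompromise, affiliationChanged, superseded), though clients rarely surface it. The sketch below is ours, with illustrative message wordings, showing how that code could be turned into the differentiated, harm-indicating communication these results argue for:

```python
# Map X.509 CRL reasonCode values to user-facing wording that distinguishes
# compromise, change in status, and routine replacement, rather than
# reporting a bare "certificate revoked".
CRL_REASON_MESSAGES = {
    "keyCompromise": "Danger: this site's key was stolen; an impostor may be using it.",
    "cACompromise": "Danger: the authority that vouched for this site was itself compromised.",
    "affiliationChanged": "Caution: the site's organizational status changed; its old credentials no longer apply.",
    "superseded": "Note: the site replaced its key in the normal course of operation.",
    "cessationOfOperation": "Note: the site has shut down this service.",
    "unspecified": "This site's credential was withdrawn for an unstated reason.",
}

def explain_revocation(reason: str) -> str:
    """Prefer reason-specific, harm-indicating wording; fall back to 'unspecified'."""
    return CRL_REASON_MESSAGES.get(reason, CRL_REASON_MESSAGES["unspecified"])

print(explain_revocation("keyCompromise"))
```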
The second hypothesis, that individuals develop mechanisms to evaluate web sites over time and enter each transaction with a new calculus of risk, cannot be supported by the evaluation. Each participant stated that they had at least seven years of experience of the web, including commerce. If the approach to a web site were one of careful updating of a slowly developed boolean function of risk, then the alteration in the second case arguably would have been less extreme. After all, the betrayal happens at the first site, not the second; so every participant should begin at the second site in exactly the same state as at the first, assuming each differentiates web sites rather than reacting to experiences on "the net" as a whole. Clearly the data provide no support for that argument. Individuals reacted strongly and immediately to the betrayal at the first site, despite being told that the first and second sites were in no way related and were in fact competitors.

4. CONCLUSIONS
We have tested two hypotheses about human behavior that can serve as axioms in the examination of technical systems. Technical systems, as explained above, embody assumptions about human responses.

The experiments have illustrated that users consider failures in benevolence to be more serious than failures in competence. This suggests that security technologies that communicate state to the end user will be most effective if they communicate in terms that indicate harm, rather than in more neutral, informative terms. Systems designed to offer security and privacy, and thus indicating both benevolence and competence, are more likely to be accepted by users. Failures in such systems are less likely to be tolerated by users, and users are less likely to subvert such systems.

As the complexity and extent of the Internet expand, users are increasingly expected to be active managers of their own information security. This has been primarily conceived in security design as enabling users to be rational about extensions of trust in the network.
The truly rational choice is for security designers to embed sometimes irrational but consistent human behaviors into their own designs.

The consideration of people's responses to computers can be seen as drawing not only on the social sciences generally but specifically on design for values, in its consideration of social determination. In the viewpoint of the social determinist, technology is framed by its users, and adoption is part of the innovative process. That is to say, designs are evaluated through post-hoc analysis of technologies after they have been adopted [16]. Beyond identifying flaws of security mechanisms, we hope to offer guidance in the analysis of future systems. It would be unwise to wait until a security mechanism is widely adopted to consider only then how easily it may be undermined by "human engineering."

5. REFERENCES
[1] Anderson, R. E., Johnson, D. G., Gotterbarn, D. and Perrolle, J., 1993, "Using the ACM Code of Ethics in Decision Making," Communications of the ACM, Vol. 36, 98-107.
[2] Abric, J.-C. & Kahan, J. P., 1972, "The effects of representations and behavior in experimental games," European Journal of Social Psychology, Vol. 2, pp. 129-144.
[3] Axelrod, R., 1994, The Evolution of Cooperation, HarperCollins, USA.
[4] Becker, L. C., 1996, "Trust in Non-cognitive Security about Motives," Ethics, Vol. 107, October, 43-61.
[5] Blaze, M., Feigenbaum, J. and Lacy, J., 1996, "Decentralized Trust Management," Proceedings of the IEEE Conference on Security and Privacy, May.
[6] Bloom, 1998, "Technology, Experimentation, and the Quality of Survey Data," Science, Vol. 280, pp. 847-848.
[7] Boston Consulting Group, 1997, Summary of Market Survey Results prepared for eTRUST, The Boston Consulting Group, San Francisco, CA, March.
[8] Camp, L. J., 2000, Trust & Risk in Internet Commerce, MIT Press, Cambridge, MA.
[9] Camp, L. J., McGrath, C. & Nissenbaum, H., 2001, "Trust: A Collision of Paradigms," Proceedings of Financial Cryptography, Lecture Notes in Computer Science, Springer-Verlag, Berlin, Fall.
[10] Clark, D. & Blumenthal, M., 2000, "Rethinking the design of the Internet: The end to end arguments vs. the brave new world," Telecommunications Policy Research Conference, Washington, DC, September.
[11] Coleman, J., 1990, Foundations of Social Theory, Belknap Press, Cambridge, MA.
[12] Compaine, B. J., 1988, Issues in New Information Technology, Ablex Publishing, Norwood, NJ.
[13] Computer Science and Telecommunications Board, 1999, Trust in Cyberspace, National Academy Press, Washington, DC.
[14] Dawes, R., McTavish, J. & Shaklee, H., 1977, "Behavior, communication, and assumptions about other people's behavior in a commons dilemma situation," Journal of Personality and Social Psychology, Vol. 35, pp. 1-11.
[15] Foley, 2000, "Can Microsoft Squash 63,000 Bugs in Win2k?", ZDNet Eweek, on-line edition, 11 February 2000, available at http://www.zdnet.com/eweek/stories/general/0,11011,2436920,00.html.
[16] Friedman, B., Kahn, P. H., Jr., and Howe, D. C., 2000, "Trust Online," Communications of the ACM, Vol. 43, No. 12, December, 34-40.
[17] Fukuyama, F., 1996, Trust: The Social Virtues and the Creation of Prosperity, Free Press, New York, NY.
[18] Garfinkel, S., 1994, PGP: Pretty Good Privacy, O'Reilly & Associates, Sebastopol, CA, pp. 235-236.
[19] Goldberg, I., Hill, A. & Shostack, A., 2001, "Privacy, ethics, and trust," Boston University Law Review, Vol. 81, No. 2.
[20] Hoffman, L. and Clark, P., 1991, "Imminent policy considerations in the design and management of national and international computer networks," IEEE Communications Magazine, February, 68-74.
[21] Kiesler, S., Sproull, L. & Waters, K., 1996, "A Prisoner's Dilemma Experiment on Cooperation with People and Human-Like Computers," Journal of Personality and Social Psychology, Vol. 70, pp. 47-65.
[22] Kerr, N. & Kaufman-Gilliland, C., 1994, "Communication, Commitment and Cooperation in Social Dilemmas," Journal of Personality and Social Psychology, Vol. 66, pp. 513-529.
[23] Luhmann, N., 1979, "Trust: A Mechanism for the Reduction of Social Complexity," in Trust and Power: Two Works by Niklas Luhmann, John Wiley & Sons, New York, 1-103.
[24] National Research Council, 1996, Cryptography's Role in Securing the Information Society, National Academy Press, Washington, DC.
[25] Nikander, P. & Karvonen, K., 2001, "Users and Trust in Cyberspace," Lecture Notes in Computer Science, Springer-Verlag, Berlin.
[26] Nissenbaum, H., "Securing Trust Online: Wisdom or Oxymoron?", forthcoming, Boston University Law Review.
[27] Office of Technology Assessment, 1985, Electronic Surveillance and Civil Liberties, OTA-CIT-293, United States Government Printing Office, Gaithersburg, MD.
[28] Office of Technology Assessment, 1986, Management, Security and Congressional Oversight, OTA-CIT-297, United States Government Printing Office, Gaithersburg, MD.
[29] Seligman, A., 1997, The Problem of Trust, Princeton University Press, Princeton, NJ.
[30] Slovic, P., 1993, "Perceived Risk, Trust, and Democracy," Risk Analysis, Vol. 13, No. 6, 675-681.
[31] Sproull, L. & Kiesler, S., 1991, Connections, The MIT Press, Cambridge, MA.
[32] Tygar, J. D. & Whitten, A., 1996, "WWW Electronic Commerce and Java Trojan Horses," Proceedings of the Second USENIX Workshop on Electronic Commerce, 18-21 November 1996, Oakland, CA, 243-249.
[33] United States Council for International Business, 1993, Statement of the United States Council for International Business on the Key Escrow Chip, United States Council for International Business, New York, NY.
[34] Wacker, J., 1995, "Drafting agreements for secure electronic commerce," Proceedings of the World Wide Electronic Commerce: Law, Policy, Security & Controls Conference, October 18-20, Washington, DC, p. 6.
[35] Walden, I., 1995, "Are privacy requirements inhibiting electronic commerce?", Proceedings of the World Wide Electronic Commerce: Law, Policy, Security & Controls Conference, October 18-20, Washington, DC, p. 10.
[36] Weick, K., 1990, "Technology as Equivoque: Sensemaking in New Technologies," in Goodman, P. S. & Sproull, L. S., eds., Technology and Organizations.
[37] Weisband, S. & Kiesler, S., 1996, "Self Disclosure on Computer Forms: Meta-analysis and Implications," Proceedings of the CHI '96 Conference on Human Factors in Computing Systems, April 1996, Vancouver.