The Tension Between Information Justice and Security: Perceptions of Facial Recognition Targeting

Elizabeth Anne Watkins
Center for Information Technology Policy, Princeton University, Princeton, NJ USA

Abstract
In the discourse on human perceptions of algorithmic fairness, researchers have begun to analyze how these perceptions are shaped by sociotechnical context. In thinking through contexts of work, a half-century of research on organizational decision-making tells us that perceptions and interpretations made within these spaces are highly bounded by surrounding contextual constraints. In this paper I report early findings from a survey I conducted to bridge these two conversations and scrutinize real-world perceptions of algorithmic decision-making in situ in a space of work. I analyze these perceptions through the case of facial recognition (or more accurately, facial verification) as account verification in gig work. In this survey I asked 100 Uber drivers, all of whom had actually been subjected to Uber's facial verification process known as Real Time Check ID, about their fairness perceptions of this process. I designed the survey to elicit their perceptions across five disparate dimensions of justice: informational, distributive, procedural, reciprocal, and interactional. I also asked them about their strategies for integrating Real Time Check ID into their workflow, including efforts at repair when the system breaks down and their potential preferences for subversive practices. Among those workers who report engaging in subversive tactics to avoid facial recognition, such as taking a picture of their car seat, their hand, or their passenger instead of their own face, one dimension of fairness elicited worse perceptions than any other: informational justice, a.k.a. transparency, of facial recognition targeting (the process for deciding which workers trigger this extra layer of verification). This research reveals tensions between transparency, security, and workers' perceptions of the "fairness" of an algorithmic system: while "too much" transparency into how workers are targeted for verification may permit bad actors to defraud the system, "too little" explanation, this research shows, is no solution either. Results have crucial implications for the allocation of transparency and the design of explanations in user-facing algorithmic fraud detection, which must address tensions between information justice and security.

Keywords
Facial recognition, facial verification, biometric, algorithmic fairness, transparency, explainability, security, sociotechnical

Joint Proceedings of the ACM IUI 2021 Workshops, April 13-17, 2021, College Station, USA. EMAIL: ew4582 [at] princeton.edu. ORCID: https://orcid.org/0000-0002-1434-589X. © 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org).

1. Introduction

Questions around the fairness of algorithmic decision-making grow urgent as these systems proliferate across domains. "Fairness" suffers from what's called the "impossibility theorem," describing the multiplicity of fairness definitions — no fewer than 21 different definitions by one measure — and the impossibility of their simultaneous reconciliation [24]. The trouble is, machine learning success is contingent on building predictive accuracy, and predictive accuracy based on historical data is often antithetical to confirming human values. Values, after all, change over time, yet the definition, collection, and labeling of different metrics as required for machine learning elevate and bake into system design the values of the specific stakeholders in power at the time and place of design.
Prediction models provide binary outputs: in terms of facial identification for account verification in gig work, for example, the only outputs possible are either "the worker's face matches the photo on file" or "the worker's face doesn't match the photo on file," as delimited by a specified confidence threshold. At some point the developers of this model had to choose a threshold, to balance too many false positives (where a low confidence threshold may yield cases where "this face matches" the profile picture, even in cases where it didn't) against false negatives (where an overly high threshold yields cases where "this face doesn't match" even in cases where it did). It's immediately apparent that when deciding where to place the confidence threshold, and which way to err in cases of error, high-power stakeholders like developers, executives, shareholders and even customers might have very different ideas of what constitutes a "fair" balance than workers, who have less say over system design.
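To make the trade-off concrete, the sketch below shows how a single confidence threshold turns a continuous match score into a binary "match"/"no match" outcome, and how moving that threshold shifts errors between false positives and false negatives. This is a minimal illustration of the general technique only; the score range, threshold value, and function names are assumptions, not Uber's or Microsoft's actual implementation.

```python
# Minimal sketch of threshold-based face verification, for illustration only.
# The score range (0-1), threshold value, and names are assumptions; they do
# not describe Uber's or Microsoft's actual systems.

DEFAULT_THRESHOLD = 0.80  # chosen by developers, not by the workers subject to it

def faces_match(match_score: float, threshold: float = DEFAULT_THRESHOLD) -> bool:
    """Return True ("the faces match") when the model's confidence clears the threshold."""
    return match_score >= threshold

# A lower threshold admits more false positives (non-matching faces "verified");
# a higher threshold admits more false negatives (legitimate drivers locked out).
if __name__ == "__main__":
    for threshold in (0.60, 0.80, 0.95):
        print(f"threshold={threshold:.2f} -> match={faces_match(0.78, threshold)}")
```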
A stream of research has begun to address how definitions of fairness are social constructs [4, 11], and the social, contextual conditions which contribute to the construction of fairness perceptions [17, 18, 35]. I contribute to this discourse by broadening the unit of analysis [2] of fairness perceptions to incorporate this wider context in my study of gig workers and their interpretations of the fairness of facial recognition.² To bridge this work theoretically, I incorporate a theory of organizational decision-making known as "fairness heuristic theory" [20].

Prior research on gig workers' online discussions of facial recognition illuminated the potential for a rare empirical look at fairness interpretations in situ. In particular, qualitative analysis of Uber drivers' online discussions of facial recognition reveals that when they talk about their experiences with the technology, or "Real Time Check ID" as the protocol is called, they discuss — some obliquely, some directly — terms related to "fairness." This is especially apparent among workers who disagree with the requirement. In one salient example, one driver posted to an online discussion board the question, "How can I beat facial recognition?" When another driver replied, "Why would you need to beat facial recognition?" the first responded, "Any tactic used by a driver to combat the crooked set up is fair game" [34].

To shed light on the entanglement between perceptions of algorithmic fairness, sociotechnical context, and organizational decision-making, I designed a survey to illuminate relationships between how workers perceive the fairness of Real Time Check ID across two involved processes: facial recognition targeting, i.e. the process deciding who gets chosen to comply with the protocol, and verification, i.e. the actual process using biometric data to verify identity. The goal of this survey is to gather fairness perceptions across a set of dimensions drawn from the literature on human perceptions of fairness and organizational justice, as well as workers' practices, negotiations, and tactics around this technology. In this paper, I share descriptive findings comparing workers' fairness perceptions and their behaviors, with a focus on perceptions of information justice, i.e. "explanations provided to people that convey information about why procedures were used" to arrive at a decision [8]. Once we have data around perceptions and behaviors, we can begin to explore the relationships between respective perceptions, and identify whether specific perceptions have explanatory power for workers' behaviors.

² Facial recognition is a general term which can apply to both facial verification (1:1 matching) as well as identification (1:n matching). The protocol I analyze in this research is more accurately described as facial verification (i.e. 1:1 matching, verifying that a face in an image uploaded from a phone matches the face in the driver's profile photo). Because I queried drivers about two distinct algorithmic protocols, involving verification as well as targeting, I use "facial recognition" as an umbrella term for both processes. I also use "selfie verification," as it's colloquially called by drivers.

2. Human Perceptions of Algorithmic Fairness in Sociotechnical Context

An emerging body of research is beginning to uncover how social context and human perceptions of algorithmic fairness interact. The interwoven nature of perceptions, behaviors, and structures which either constrain or promote these cognitive and functional processes contributes to the messy realities of institutional life for both algorithms and users. These studies have been essential to establish that users' ideas of fairness are highly contextual social constructs [35] and that even within equitable contexts algorithms can be subject to different interpretations [4]. Workers have little control over the algorithms that manage their work, and their perceptions of the fairness of algorithms within a work context are influenced by the relative control they're granted over the algorithmic decision [16]. Beunza [3] suggests that, when workers are directed by an algorithm that they perceive as unfair, this may increase their willingness to engage in unethical behavior. Several researchers have drawn important frameworks from the management science literature, in particular the concept of organizational justice [11]. Organizational justice offers useful handles for analyzing multiple dimensions of how a worker is treated by a single algorithm within an organization [17]. One key insight is that users perceive algorithmic fairness along multiple dimensions, including: 1) distributive justice, i.e. fairness in the distribution of outcomes, 2) procedural justice, i.e. fairness of the logic underpinning a decision, 3) informational justice, i.e. understanding how decisions are made, and 4) interactional justice, i.e. whether participants feel decision-making processes treat them with dignity and respect [4].
When it comes to operationalizing context for research purposes, participants in such studies have most often been presented with speculative scenarios on which to base their perceptions. In available survey and participatory-workshop studies, participants have been given a fictional second-person scenario, such as "imagine you are applying for a personal financial loan," or "a promotion at work," or "car insurance premiums" [4]. Some studies go further and recruit participants from marginalized communities, asking them how they feel about the types of contexts in which algorithmic decisions often take place, again via speculative scenarios (e.g. discriminatory advertising) [35]. This is where my research picks up the torch and contributes to the discourse on algorithmic fairness in sociotechnical context: by investigating fairness perceptions in situ, in real contexts, with people actually engaged in and subject to the actual algorithms about which they're queried. In this research, I focus on the context of work.

3. Decision-Making in Organizations

Organizational theorists have long posited that decision-making in organizations is a messy process, heavily contingent on local interpretations, values, and available knowledge. Decisions around technology in organizations are no exception. Theorists in this school have drawn from language in Science, Technology, and Society (STS) studies to better understand how the social and the technical are co-constitutive in spaces of work [26]. Local practices and interpretations are influential components of how new technology becomes integrated into organizations [1, 25]. Workers alter their use of technologies depending on what they believe the tools are for, and resist prescriptive uses (i.e. directives from the organization) if those directives don't match their interpretations [19]. The power of interpretation can also be seen through the multiple accounts of friction and resentment created when workers' interpretations of a technology's use don't match those of the organization [7, 16]. The theory of bounded rationality, from mid-century organizational theory, illuminates how risk and uncertainty prevent anyone in an organization from making perfectly rational decisions [32]. Organizational inputs and outputs are unpredictable, and information is never perfect and often incomplete.

How do organizational members make choices in such a chaotic environment? "Fairness heuristic theory" [20] argues that people working in an organization address the cognitive load of decision-making, in particular tensions between their individual autonomy and their group identity, by drawing on a cognitive shortcut of fairness derived from their perceptions of how fairly they've been treated in other decisions in the organization. The fairness heuristic provides a framework to scrutinize users' perceptions in AI-infused sociotechnical systems of work, and the relationship between these perceptions and subsequent behaviors. My first step is to assess fairness perceptions across multiple dimensions and organizational processes, to establish a basis of comparison.

This research embraces a sociotechnical lens on algorithmic fairness, recognizing the intertwined influence of the social and technical components of a system [9]. What a technology means, how it's interpreted and communicated, and what problems it addresses arise from the interplay of both social and technical aspects of a system. Without attention to the local, contingent nature of knowledge in sites of integration, the design of algorithmic systems is likely to suffer from a number of abstractions which may eventually damage actors in that system [30].
4. Background

Facial recognition as a form of account security has been used by Uber on drivers under the name "Real Time Check ID" since it was rolled out in select regions in 2016.³ This involves two primary interwoven algorithmic processes: 1) targeting, which uses behavioral data and algorithmic modeling to detect potentially fraudulent activity on the app and select drivers whose behavior is flagged for additional checks, and 2) verification, the algorithmic processes which verify, or confirm, that the face of the person behind the wheel and logged into the platform matches the photo on the profile of the person who's been approved to work by Uber.

³ Implementation of this protocol varies in accordance with local data protection legislation. In the U.K., for example, with GDPR data protections, drivers can request human review instead of algorithmic review.

What does this look like in practice? When a driver logs onto the app, a circle pops up on the screen with a text command to the driver to position their face inside the circle. A photo is taken using the driver's phone camera. That photo (or a data-based representation of that photo) is sent to Microsoft, where computer-vision software-as-a-service compares it with the "official" photo of the driver on the account. If it is decided that the faces "match," the driver is considered "verified," and allowed to log into the platform and start working. If it is decided that the faces "don't match," the driver must take another photo. They are not permitted to log into the platform until a match is reached. They must either continue to re-take photos until they get verified, or, if verification continues to fail, go to a customer-service center in person to get assistance.
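As a rough summary of the flow just described, the sketch below linearizes the loop a driver faces: capture a selfie, receive a remote match decision, retry on failure, and escalate to in-person support if verification keeps failing. The function names, retry limit, and escalation rule are assumptions made for illustration; the paper does not specify them and this is not Uber's code.

```python
# Illustrative sketch of the Real Time Check ID verification loop described above.
# capture_selfie, remote_face_match, and MAX_ATTEMPTS are hypothetical; actual
# retry limits and escalation rules are not public.

from typing import Callable

MAX_ATTEMPTS = 5  # assumed for illustration

def real_time_check_id(profile_photo: bytes,
                       capture_selfie: Callable[[], bytes],
                       remote_face_match: Callable[[bytes, bytes], bool]) -> str:
    for _ in range(MAX_ATTEMPTS):
        selfie = capture_selfie()                     # driver positions their face in the circle
        if remote_face_match(selfie, profile_photo):  # cloud service returns "match" / "no match"
            return "verified: driver may log in and start working"
        # "no match": the driver must re-take the photo before being allowed to work
    return "unresolved: driver directed to an in-person customer-service center"
```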
5. Survey Methods

This descriptive, exploratory survey measures and compares workers' perceptions of five types of fairness across Real Time Check ID targeting and verification. I adapted fairness perception questions from recent work [4] drawing on the psychology of justice research, identifying the multiple, simultaneous parameters of fairness perception: Procedural justice concerns the processes of making a decision. Distributive justice concerns how results are allocated across participants or between groups, also known in some circles as outcome fairness. Interactional justice concerns the extent to which the affected individual is treated with respect by the decision-makers. Informational justice pertains to the information available to participants about the decision-making process. Reciprocal justice pertains to individuals' comparisons between their inputs and their outputs in their involvement with an exchange, and whether they're getting a fair return for their efforts. This last dimension, Reciprocal justice, was motivated by literature examining perceptions of fairness within organizations, in particular around equity [13]. I added this dimension to capture perceptions related to the labor that users put into selfie verification in exchange for access to the platform, and to recognize that workers do not passively receive an algorithmic judgment but rather are active creators and maintainers of the conditions that allow algorithms to make that judgment.

All five of these fairness dimensions were worded into statements for each decision-making process, for which participants could select a response along a five-point scale of "Strongly Agree," "Somewhat Agree," "Neither Agree nor Disagree," "Somewhat Disagree," or "Strongly Disagree," as follows:

Real Time Check ID Targeting:

Distributive: The process for deciding who gets chosen for selfie verification treats all drivers equally.
Informational: I understand how drivers are chosen for selfie verification.
Interactional: The process for how drivers are chosen for selfie verification shows respect for me and the work that I do.
Reciprocal: The benefits that I receive from selfie verification are fair, as compared with the time and effort I spend operating and/or trouble-shooting.
Procedural: The way Uber decides which drivers get the pop-up for selfie verification is a fair way to decide who has to verify their account.

Real Time Check ID Verification:

Distributive: Selfie verification treats all drivers equally.
Informational: I understand how selfie verification works to verify my identity.
Interactional: Selfie verification shows respect for me and the work that I do.
Reciprocal: The benefits that I receive from selfie verification are fair, as compared with the time and effort I spend operating and/or trouble-shooting.
Procedural: Using selfie verification to verify a driver's identity is a fair way to ensure that drivers' accounts are secure.
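To make the design of the instrument explicit, the sketch below encodes the ten statements as a grid of two processes by five justice dimensions, all answered on the same five-point scale. This is only an illustrative representation of the items listed above, not the survey software used in the study; two items are shown and the rest follow the same pattern.

```python
# Illustrative encoding of the instrument: 2 processes x 5 justice dimensions,
# each item answered on the same five-point scale. Wording is quoted from the
# statements above; this is not the code or platform used to run the survey.

SCALE = ["Strongly Agree", "Somewhat Agree", "Neither Agree nor Disagree",
         "Somewhat Disagree", "Strongly Disagree"]

ITEMS = {
    ("targeting", "informational"):
        "I understand how drivers are chosen for selfie verification.",
    ("verification", "informational"):
        "I understand how selfie verification works to verify my identity.",
    # ...the remaining eight (process, dimension) pairs follow the same pattern
}

for (process, dimension), statement in ITEMS.items():
    print(f"[{process} / {dimension}] {statement}")
```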
Participants were recruited, and took the survey, using the platform Qualtrics. A number of platforms exist for researchers to recruit survey participants, all of which feature different benefits and drawbacks. While some findings vary as to which web-based survey platforms yield results that are representationally similar to the U.S. population, others have found that Qualtrics' participant population is the most demographically and politically representative [5]. Because demographics and racial justice have been so important to recent studies of algorithmic bias, in particular facial recognition, Qualtrics was selected as the platform both for recruitment and for the survey itself. All participants had previously signed up to be available potential participants on Qualtrics. Typically, respondents choose to join a survey through a double opt-in process. Upon registration, they enter basic data about themselves, including demographic information, education, job title, hobbies, and interests. Whenever a survey is created for which that individual would qualify based on the information they have given, they are notified via email and invited to participate. The email invitation is generic, with no specifics as to the topic of the survey itself. They are told that they qualify for a survey, given a link, and asked to follow the link if they would like to participate. They are also told the duration of the survey. Participants in this survey were compensated for their time and effort on a points system redeemable towards air miles and gift cards. Recruitment quotas for race, ethnicity, and self-identified gender were established using the most recent available data on Uber's labor market in the United States [12], to gather a population resembling, as much as possible, the U.S. Uber labor market.

6. Results

One hundred participants took part in this survey over the course of five days in July 2020. This is a small dataset, and it is not intended to be taken as statistically representative of the entirety of the U.S. Uber labor market. However, the data is proportionally similar to that market, can provide some descriptive idea of the perceptions of drivers in a group, and can lend clarity to potential next steps in research. In accordance with survey practices set by Pew and others, race and ethnicity were asked as separate questions, with Hispanic ethnicity in a separate question from racial identity. These choices were then re-condensed for analysis purposes to assure that the final dataset resembled available information on the Uber U.S. labor market. Each group's representation resembles its representation in the United States Uber driver population, along lines of race and gender identity, to within three percentage points. Screening questions ensured that 100% of participants self-report currently driving for Uber and that they have actually been required at least once to comply with Real Time Check ID; 100% are within the United States, to assure proportionality to available statistics on Uber's labor market. The survey totaled 34 questions covering drivers' experiences and perceptions, with some demographic questions. Average time to complete the survey was 3.05 minutes, with a median time of 4.03 minutes.

Seven percent of participants were under the age of 25, 34% were 25-34, 48% were 35-44, seven percent were 45-54, and four percent were 55-64.

In their responses to fairness statements, one type of statement provoked a far lower rate of agreement than any other statement across any dimension or process (see Figure 1): drivers disagree most with the statement on the information justice of Real Time Check ID targeting. This fairness statement was phrased as "I understand how drivers are chosen for selfie verification," a colloquial way of describing a state in which drivers are provided with enough information to understand how targeting decisions are made. Put simply, drivers disagree with the idea that targeting decisions made about them are transparent or understandable.

Figure 1: Drivers' total rates of agreement with fairness statements about Real Time Check ID targeting (top) and verification (bottom) across five dimensions of justice.

What relationship, if any, can be seen between this perception and drivers' behaviors? To gather information about behaviors, I asked a subset of drivers (n=75) about their Real Time Check ID compliance preferences, with the question "In the event that you PREFER not to comply with selfie verification, which tactics have you used?" Nearly two-thirds (65.3%) responded that they had used subversive tactics to avoid showing their face to the camera. Over one-third (38.7%) reported that they had submitted, instead of their face, "an unusual photo with selfie verification hoping the system will approve it (like of the car seat, or the passenger)."

I then looked specifically at this group's information justice perceptions. Among these "subversive" drivers, rates of agreement with information justice statements about targeting and verification drop across the board (see Figure 2).

Figure 2: Drivers' total rates of agreement specifically with information justice statements about Real Time Check ID targeting (top) and verification (bottom) for all respondents (dark grey) and "subversive" drivers, i.e. those who report they prefer not to comply (light grey).

Among the entire set of drivers, the total rate of agreement with the information justice of Real Time Check ID targeting is 74%; among subversive drivers, that rate drops to 68.9%. Among all drivers, the total rate of agreement with the information justice of verification is 86%; among subversive drivers, that rate drops to 79.3%.
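For readers who want to reproduce this kind of summary, the sketch below computes a total rate of agreement from five-point responses, under the assumption (not stated explicitly above) that "agreement" pools the "Strongly Agree" and "Somewhat Agree" options; the sample responses are invented and do not reproduce the survey data.

```python
# Sketch of computing a "total rate of agreement" from five-point Likert responses.
# Assumes agreement pools "Strongly Agree" and "Somewhat Agree"; the sample
# responses below are invented, not the study's data.

AGREEMENT = {"Strongly Agree", "Somewhat Agree"}

def total_agreement_rate(responses: list[str]) -> float:
    """Share of respondents who selected either agreement option."""
    return sum(r in AGREEMENT for r in responses) / len(responses)

sample = ["Strongly Agree", "Somewhat Disagree", "Somewhat Agree",
          "Neither Agree nor Disagree", "Strongly Agree"]
print(f"{total_agreement_rate(sample):.0%}")  # -> 60%
```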
7. Conclusion and Discussion

This research describes three findings regarding drivers' perceptions and behaviors around the fairness of algorithmic security protocols in their work. First, among five justice statements around the protocols of facial recognition (Real Time Check ID targeting and verification), the lowest rates of driver agreement concern the information justice of Real Time Check ID targeting. Drivers disagree at comparatively high rates with statements that they understand how they're being targeted for selfie verification.

Second, this research yields potential evidence for an observable relationship between perceptions and subsequent behaviors. Among workers who self-report that they prefer not to comply with facial recognition protocols (and act on that preference), there is a marked drop in agreement with information justice statements regarding targeting, as compared to the entire group of participants.

Third, within this group, similar drops are observable across two disparate algorithmic processes: Real Time Check ID targeting as well as Real Time Check ID verification. More research is needed to better understand whether such similarities constitute a fairness heuristic. Such research may investigate, for example, whether drivers have similar fairness perceptions across different processes of algorithmic management such as task allocation and pricing, or whether fairness perceptions of algorithmic processes have any relationship with subsequent behaviors for other processes related to that workflow or work platform.

It's difficult to overstate how much gig work platforms use information asymmetries and algorithmic mechanisms in order to re-allocate risk, uncertainty, and ambiguity onto workers [21, 27, 28, 22]. The passage of Proposition 22 in California has in particular cemented the definition of gig work there as contract work without employer protections. Research about "fairness" perceptions of algorithmic protocols in such contexts must recognize these asymmetries, and further, how the availability (or lack thereof) of other options may influence how "fair" workers perceive their algorithmic options to be. Future research could explore how precarity and economic reliance contribute to perceptions of transparency, fairness, and justice.

A wealth of research has demonstrated how recognition technologies are built on extractive means for carceral ends, using surveillance technologies to build ever-larger datasets which mete out harms, due to the faulty functionality of the technology, largely to Black, Indigenous, and people of color [6, 28]. The market for emotion recognition, for example, continues to grow, despite the fact that the scientific foundation for such recognition claims is dubious at best and violates human rights at worst [23]. Recognition technologies also produce violence for groups such as trans and nonbinary people through the functional design of categorization, which is a poor match for the recognition of human identity, which is fluid and malleable [15, 31]. Black and transgender workers have already been locked out of Uber due to Real Time Check ID [14, 33]. Recent research has brought about critically needed legislative responses to protect people. More research is needed into how the intrinsic injustices of recognition technology intersect with the injustices of algorithmic management and precarious work to produce distributed harms.

Specifically, these findings provoke questions about the relationships between algorithmic transparency, information justice, and effective security protocols. A tension emerges between informational justice for drivers and the need to obscure the details of how Real Time Check ID targeting works. If the processes and factors behind targeting algorithms were made more transparent to drivers, this might become a vulnerability which bad actors could exploit. "Too much" transparency into how drivers are targeted for verification may permit bad actors to defraud the system. Yet "too little" explanation, this research shows, is no solution either: a lack of information justice (what we could call information injustice) seems to correlate with "subversive" practices, which may be categorized as deviance and result in drivers being barred from the platform. These results have crucial implications for the design of explanations and transparency in user-facing algorithmic fraud detection, which must address tensions between informational justice and security.

Acknowledgments

This work was made possible by support provided by the UC Berkeley Center for Long-Term Cybersecurity, for their symposium Data Rights, Privacy, and Security in AI. Thanks to the participants of that symposium, especially Shazeda Ahmed, Richard Wong, and Karen Levy. Support was also provided via a travel grant from the ACM Conference on Fairness, Accountability, and Transparency (FAccT). Invaluable feedback was provided by Susan McGregor and Amanda Lenhart. This work would not be possible without the guidance of Kelly Caine and the feedback of her graduate students at Clemson University. Additional thanks to the AI on the Ground group at the Data & Society Research Institute, including Madeleine Elish, Emanuel Moss, Jake Metcalf, and Ranjit Singh, and to the CITP group at Princeton, especially Amy Winecoff. Lastly, thanks to the anonymous reviewers for their generous and thoughtful feedback.
References

[1] Bailey, D.E. and Leonardi, P.M., 2015. Technology Choices: Why Occupations Differ in Their Embrace of New Technology. MIT Press.
[2] Barad, K., 2014. Diffracting diffraction: Cutting together-apart. Parallax, 20(3), pp. 168-187.
[3] Beunza, D., 2019. Taking the Floor: Models, Morals, and Management in a Wall Street Trading Room. Princeton: Princeton University Press.
[4] Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J. and Shadbolt, N., 2018. 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
[5] Boas, T.C., Christenson, D.P. and Glick, D.M., 2020. Recruiting large online samples in the United States and India: Facebook, Mechanical Turk, and Qualtrics. Political Science Research and Methods, 8(2), pp. 232-250.
[6] Buolamwini, J. and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (pp. 77-91).
[7] Christin, A., 2017. Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2), p. 2053951717718855.
[8] Colquitt, J.A., Conlon, D.E., Wesson, M.J., Porter, C.O.L.H. and Ng, K.Y., 2001. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. Journal of Applied Psychology, 86, pp. 425-445.
[9] Elish, M.C. and Watkins, E.A., 2020. Repairing Innovation: A Study of Integrating AI in Clinical Care. New York: Data & Society Research Institute.
[10] Griesbach, K., Reich, A., Elliott-Negri, L. and Milkman, R., 2019. Algorithmic control in platform food delivery work. Socius, 5, p. 2378023119870041.
[11] Grgic-Hlaca, N., Redmiles, E.M., Gummadi, K.P. and Weller, A., 2018. Human perceptions of fairness in algorithmic decision making: A case study of criminal risk prediction. In Proceedings of the 2018 World Wide Web Conference (pp. 903-912).
[12] Hall, J. and Krueger, A., 2016. An Analysis of the Labor Market for Uber's Driver-Partners in the United States. National Bureau of Economic Research Working Paper Series. https://www.nber.org/papers/w22843.pdf
[13] Joshi, K., 1989. The measurement of fairness or equity perceptions of management information systems users. MIS Quarterly, 13(3), pp. 343-358.
[14] Kersley, A., 2021. Couriers say Uber's 'racist' facial identification tech got them fired. WIRED. https://www.wired.co.uk/article/uber-eats-couriers-facial-recognition
[15] Keyes, O., 2018. The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), pp. 1-22.
[16] Lebovitz, S., Levina, N. and Lifshitz-Assaf, H., 2019. Doubting the diagnosis: How artificial intelligence increases ambiguity during professional decision making. Available at SSRN 3480593.
[17] Lee, M.K., Jain, A., Cha, H.J., Ojha, S. and Kusbit, D., 2019. Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), pp. 1-26.
[18] Lee, M.K., Kusbit, D., Metsky, E. and Dabbish, L., 2015. Working with machines: The impact of algorithmic and data-driven management on human workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1603-1612).
[19] Leonardi, P.M., 2009. Why do people reject new technologies and stymie organizational changes of which they are in favor? Exploring misalignments between social interactions and materiality. Human Communication Research, 35(3), pp. 407-441.
[20] Lind, E.A., 2001. Fairness heuristic theory: Justice judgments as pivotal cognitions in organizational relations. Advances in Organizational Justice, 56(8).
[21] Moradi, P. and Levy, K., 2020. The future of work in the age of AI: Displacement or risk-shifting? In M. Dubber, F. Pasquale and S. Das (eds.), The Oxford Handbook of Ethics of AI, pp. 271-287.
[22] Moss, E. and Metcalf, J., 2020. High tech, high risk: Tech ethics lessons for the COVID-19 pandemic response. Patterns, 1(7), p. 100102.
[23] Marda, V. and Ahmed, S., 2021. Emotional Entanglement: China's emotion recognition market and its implications for human rights. Article 19.
[24] Narayanan, A., 2018. Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., New York, USA (Vol. 1170).
[25] Orlikowski, W.J. and Gash, D.C., 1994. Technological frames: Making sense of information technology in organizations. ACM Transactions on Information Systems, 12(2), pp. 174-207.
[26] Orlikowski, W.J. and Scott, S.V., 2008. Sociomateriality: Challenging the separation of technology, work and organization. Academy of Management Annals, 2(1), pp. 433-474.
[27] Qadri, R., 2020. Algorithmized but not atomized? How digital platforms engender new forms of worker solidarity in Jakarta. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 144-144).
[28] Raji, I.D. and Buolamwini, J., 2019. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 429-435).
[29] Rosenblat, A. and Stark, L., 2016. Algorithmic labor and information asymmetries: A case study of Uber's drivers. International Journal of Communication, 10, p. 27.
[30] Selbst, A.D., boyd, d., Friedler, S.A., Venkatasubramanian, S. and Vertesi, J., 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 59-68).
[31] Scheuerman, M.K., Paul, J.M. and Brubaker, J.R., 2019. How computers see gender: An evaluation of gender classification in commercial facial analysis services. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), pp. 1-33.
[32] Simon, H., 1957. A behavioral model of rational choice. In Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting, pp. 241-260.
[33] Urbi, J., 2018. Some transgender drivers are being kicked off Uber's app. CNBC. https://www.cnbc.com/2018/08/08/transgender-uber-driver-suspended-tech-oversight-facial-recognition.html
[34] Watkins, E.A., 2020. 'Took a Pic and Got Declined, Vexed and Perplexed': Facial Recognition in Algorithmic Management. In Conference Companion Publication of the 2020 on Computer Supported Cooperative Work and Social Computing (pp. 177-182).
[35] Woodruff, A., Fox, S.E., Rousso-Schindler, S. and Warshaw, J., 2018. A qualitative exploration of perceptions of algorithmic fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14).