=Paper=
{{Paper
|id=Vol-2884/paper_122
|storemode=property
|title=Artificial Intelligence and Resource Allocation in Healthcare: The Process-Outcome Divide in Perspectives on Moral Decision-Making
|pdfUrl=https://ceur-ws.org/Vol-2884/paper_122.pdf
|volume=Vol-2884
|authors=Sonia Jawaid Shaikh
}}
==Artificial Intelligence and Resource Allocation in Healthcare: The Process-Outcome Divide in Perspectives on Moral Decision-Making==
Sonia Jawaid Shaikh
Annenberg School for Communication, University of Pennsylvania
3620 Walnut St, Philadelphia, PA 19104, United States
sjshaikh@asc.upenn.edu

AAAI Fall 2020 Symposium on AI for Social Good. Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract

Pandemics or health emergencies create situations where the demand for clinical resources greatly exceeds the supply, leading to health providers making morally complex resource allocation decisions. To help with these types of decisions, health care providers are increasingly deploying artificial intelligence (AI)-enabled intelligent decision support systems. This paper presents a synopsis of the current debate on these AI-enabled tools to suggest that the existing commentary is outcome-centric, i.e. it presents competing narratives in which AI is described as a cause of problematic or solution-oriented abstract and material outcomes. Human decision-making processes such as empathy, intuition, and structural and agentic knowledge that go into making moral decisions in clinical settings are largely ignored in this discussion. It is argued here that this process-outcome divide in our understanding of moral decision-making can prevent us from taking the long view on consequences such as conflicted intuition, moral outsourcing and deskilling, and provider-patient relationships that can emerge from the long-term deployment of technology devoid of human processes. To preempt some of these effects and create improved systems, researchers, providers, designers, and policymakers should bridge the process-outcome divide by moving toward human-centered resource allocation AI systems. Recommendations on bringing the human-centered perspective to the development of AI systems are discussed in this paper.

Introduction

Resource allocation is a type of decision-making which involves the procurement, assignment, and distribution of resources between actors. Typically, decision-making pertaining to resource allocation is considered to be a difficult enterprise because access to or ownership of resources is competitive and can affect not only individual and group health, but also a set of interconnected socio-economic variables. Furthermore, this type of decision-making has a moral dimension, as it requires humans (e.g. physicians, hospital administrators, nurses, etc.) to make tradeoffs such as those involving utilitarian and egalitarian parameters, which exacerbates its complexity (Robert et al. 2020). In a pandemic such as COVID-19, the demand for resources (e.g. beds, ventilators, ICU units, etc.) is many times greater than their supply, which complicates the problem of allocation. This situation forces health care providers to set up a variety of triage protocols to determine if and to what extent someone qualifies for one or more resources.

One of the ways providers are making resource allocation decisions to deal with the COVID-19 pandemic is to use AI-enabled decision support systems. These systems can use patients' electronic health records (EHR) and/or clinical measurements (e.g. blood pressure, fever, health conditions) to make diagnoses and prognoses (Lamanna and Byrne 2018; Medlej 2018), which can subsequently be used to make decisions about the level of care and allocation of resources to patients (Debnath et al. 2020).
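To make the workings of such a system concrete, consider a minimal sketch of a SOFA-style additive severity score (cf. Medlej 2018) feeding a bed-allocation ranking. The fields, thresholds, and weights below are invented for illustration and are not clinical guidance:

```python
from dataclasses import dataclass

@dataclass
class Vitals:
    """Subset of clinical inputs a decision support system might
    read from an EHR feed (illustrative fields only)."""
    temperature_c: float         # body temperature, Celsius
    systolic_bp: float           # systolic blood pressure, mmHg
    spo2: float                  # oxygen saturation, percent
    has_chronic_condition: bool  # e.g. heart disease, diabetes

def severity_score(v: Vitals) -> int:
    """Toy additive severity score in the spirit of SOFA-style scales
    (cf. Medlej 2018). Thresholds and weights are invented."""
    score = 0
    if v.temperature_c >= 38.0:
        score += 1               # fever
    if v.systolic_bp < 100:
        score += 2               # hypotension
    if v.spo2 < 92:
        score += 3               # hypoxia
    if v.has_chronic_condition:
        score += 1               # documented underlying condition
    return score

def allocate_beds(queue: dict, beds: int) -> list:
    """Recommend beds for the highest-scoring patients in the queue."""
    ranked = sorted(queue, key=lambda pid: severity_score(queue[pid]),
                    reverse=True)
    return ranked[:beds]

queue = {
    "patient_a": Vitals(39.1, 95, 90, True),
    "patient_b": Vitals(37.2, 120, 97, False),
    "patient_c": Vitals(38.4, 105, 91, False),
}
print(allocate_beds(queue, beds=2))  # ['patient_a', 'patient_c']
```

A real system would draw these inputs from the EHR and use validated clinical scales; the point of the sketch is only that the allocation recommendation is fully determined by whatever parameters the developers choose to encode.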
Since the technology can affect people's lives in multiple ways, it renders itself a problem concerning social good, and thus deserves our attention. Furthermore, as it becomes more sophisticated and widely deployed, it needs to be understood now to preempt any negative consequences on human decision-making practices and to generate helpful policies and guidelines.

To enhance our understanding of this matter, this paper presents a brief synopsis of the debates surrounding the use of AI to help humans make moral decisions pertaining to resource allocation during health crises. The core argument made here is that the current debate on the deployment of AI-enabled decision support systems comprises competing narratives that are mostly outcome-centric, i.e. they focus on the abstract (e.g. fairness) and material (e.g. saving costs) outcomes AI technology can yield. By emphasizing these outcomes, we fundamentally ignore the human decision-making processes that health care providers also use, in addition to established clinical guidelines, when allocating resources. The disregard of human processes of decision-making in the pursuit of AI's use in resource allocation might have long-term consequences for the design of technology and providers' abilities to make decisions. My hope is that this paper will provide insights to researchers, designers, and policymakers to bridge the process-outcome divide and implement more human-centered technology for moral decision-making in clinical settings.

To support these arguments, the following sections begin by presenting a brief synthesis of current perspectives on using AI for resource allocation during COVID-19. We observe that the predominant ideas on making resource allocation decisions reflect a process-outcome divide where the current debate is heavily tilted toward framing AI as an entity that can create outcomes which are either problematic or solution-oriented in nature. This approach disregards human processes such as empathy, intuition, and structural and agentic knowledge that health care providers rely upon to make resource allocation decisions. This is followed by noting the possible effects of long-term deployment of outcome-centric technology on human decision-making, such as disruptions in intuitive processes, moral outsourcing and deskilling, and relationships with patients. It is recommended here that the process-outcome divide must be bridged by creating human-centered AI systems which can preempt adverse consequences for providers and patients. Human-centered AI systems in health care can be incorporated by having providers work as co-developers of technology, building in-house AI capabilities, and developing regulations pertaining to the use of AI in health care settings.

AI and Resource Allocation: An Outcome-Centric Approach to Decision-Making

Fairness is a recurrent theme in the discussion on the use of AI to make moral decisions. The concept of fairness, when applied to any resource allocation process, refers to the instance whereby a decision results in a reduction of biases that may prioritize one group over another, or where it increases equity between different stakeholders. The goal of any decision-maker, therefore, is to increase fairness as an outcome. However, the debate on the use of AI with reference to fairness is a debate comprising competing narratives. Some argue that AI-based tools in health care resource allocation during the COVID-19 pandemic are useful because machines are driven by complex logic and predetermined parameters, and therefore can be fair and less biased resource allocators (Shea et al. 2020). This perspective frames AI-enabled intelligent decision support systems as solutions that can amplify fairness.

However, a competing narrative suggests the opposite. It argues that AI systems can create problems as they may reflect existing systemic biases and thus are more likely to make unfair appraisals, exacerbating inequality between different racial and socioeconomic groups (Röösli et al. 2020). For instance, a study found that a commercial algorithm factored in health care costs more heavily compared to physiological symptoms of illness, which led to sicker Black patients being provided with fewer services (Obermeyer et al. 2019). In the case of a pandemic such as COVID-19, an AI-based tool might use underlying health conditions (e.g. obesity, heart problems) or disability to predict a lower chance of recovery for a patient suffering from virus-induced complications, which would affect the likelihood of them receiving a hospital bed. An AI system such as this is likely to perpetuate unfair outcomes by giving people who suffer from ailments due to socioeconomic inequalities a lower chance of accessing a health resource.
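The mechanism behind this finding can be made concrete with a small simulation, assuming (for illustration only, with invented numbers) that two groups have identical distributions of medical need but unequal access to care, so that historical cost understates one group's need:

```python
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    """'need' is the true illness burden; 'cost' is historical spending.
    Group B is assumed to spend half as much per unit of need, a crude
    stand-in for unequal access to care."""
    need = random.uniform(0, 10)
    access = 1.0 if group == "A" else 0.5
    cost = need * access * random.uniform(0.8, 1.2)
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in "A" * 500 + "B" * 500]

def share_of_group_b(rank_by: str, capacity: int = 200) -> float:
    """Fraction of group-B patients among the top `capacity` patients
    when ranked by the given field."""
    top = sorted(patients, key=lambda p: p[rank_by], reverse=True)[:capacity]
    return sum(p["group"] == "B" for p in top) / capacity

print(f"ranked by need: {share_of_group_b('need'):.2f}")  # close to 0.50
print(f"ranked by cost: {share_of_group_b('cost'):.2f}")  # far below 0.50
```

Ranking by the cost proxy under-selects the lower-spending group at any fixed capacity even though need is identical by construction, which is the shape of the bias Obermeyer et al. (2019) documented.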
In addition to fairness, another theme concerning the use of AI tools in health care resource allocation pertains to the effects of AI's computational prowess on various abstract and material outcomes. It has been argued that, unlike humans, machines have extraordinary computing and information processing power which allows them to gather, analyze, and interpret data quickly, which ultimately helps with timely diagnosis and provision of care (Shea et al. 2020). This not only determines who gets resources, but also helps save health care costs, effort, and time (Adly et al. 2020). A differential take suggests that AI's very computational reach and scope can backfire if a corrupt algorithm incorrectly diagnoses or distributes resources to many people in a very short amount of time.

It is evident that the current perspectives discussed above on deploying AI to make resource allocation decisions are concerned with tackling abstract (e.g. fairness) or material outputs (e.g. costs). Such framing is not only outcome-centric, but also overly simplifies complex phenomena concerning moral decision-making such as resource allocation. The application of AI-enabled technology to allocate resources occurs in human contexts where decision-makers use distinct and identifiable processes to make moral decisions. An AI-enabled system neither accounts nor substitutes for these processes, and therefore the need for identifying and discussing human processes becomes ever more important, to not only fill theoretical voids but also affect how we design and implement technology.

The Role of Human Decision-Making Processes in Diagnoses and Resource Allocation

Clinical decision-making in the realm of resource allocation is a multifaceted activity which requires providers to use complex decision-making processes in addition to predefined scientific and well-established protocols. The sections below present a brief discussion of the distinct processes which are regularly employed in moral decision-making by health care providers but are yet to be explored within the context of resource allocation, especially during a pandemic.

Intuition

Intuition refers to an information processing mode which lacks conscious reasoning but incorporates affective and cognitive elements to make decisions (Sinclair and Ashkanasy 2005). Doctors and caregivers often use their intuitions as a part of their clinical decision-making processes, in addition to using guidelines and medical procedure (Van de Brink et al. 2019; Rew 2000). The use of intuition or 'gut feelings' to allocate resources is evidenced across cultures (Le Reste et al. 2013; Ruzca et al. 2020). Intuitive decision-making processes can affect how health care providers make diagnostic recommendations which lead to the allocation of services. For instance, findings from a large-scale study have shown that doctors' sentiments affected the number of tests their patients received in an ICU setting (Ghassemi et al. 2018). This suggests that providers' intuition plays an important role in making diagnostic, and subsequently resource allocation, decisions.
For in- cesses in Diagnoses and Resource Allocation stance, when we reflect on a process empathetically, we are Clinical decision-making in the realm of resource allocation more likely to understand how it affects people which can is a multifaceted activity which requires providers to use therefore allow us to intervene to help and make changes for complex decision-making processes in addition to prede- them (Batson 2016). To illustrate this further, let’s recon- fined scientific and well-established protocols. The sections sider the findings on the use of algorithm which led to Black below present a brief discussion on the distinct processes patients being given access to fewer resources despite them which are regularly employed in moral decision-making by being sicker since the tool factored the costs of health care health care providers but are yet to be explored within the more heavily when allocating services (Obermeyer et al. context of resource allocation especially during a pandemic. 2019). If the AI program were developed using an empa- thetic approach, then it might have accounted for the fact Intuition that Blacks on average have lower income than White pa- Intuition refers to an information processing mode which tients and thus, are less likely to spend on hospital services lacks conscious reasoning but incorporates affective and despite having more physiological symptoms. This shows cognitive elements to make decisions (Sinclare and Ash- that the design and use of technology to make moral deci- kanasy 2005). Doctors and caregivers often use their intui- sions without any concern for empathetic human processes tions as a part of their clinical decision-making processes in can reflect in the outcomes technology produces. addition to using the guidelines and medical procedure (Van While intuition, structural and agentic knowledge, and de Brink et al. 2019; Rew 2000). Evidence for the use of empathetic concern are important processes that help and intuition or ‘gut feelings’ to allocate resources is evidenced guide moral decision-making in clinical (and non-clinical across other cultures (Le Reste et al. 2013; Ruzca et al. settings), they are largely ignored in the debate on the use of 2020). Intuitive decision-making process can affect how AI to allocate resources since competing narratives are health care providers make diagnostic recommendations mostly focused on the outcomes the technology produces. which lead to allocation of services. For instance, findings This raises the question of what the deployment of AI means from a large-scale study on has shown that doctors’ senti- for human processes in decision-making. To help research- ments affected the number of tests their patients received in ers, administrators, and policy makers engaged in long-term an ICU setting (Ghassemi et al. 2018). This suggests that planning and thinking, I present some reflections on the pos- providers’ intuition plays an important role in making diag- sible effects of AI-enabled tools devoid of elements con- nostic and subsequently resource allocation decisions. cerning human processes. Structural and Agentic Knowledge Deployment of AI-Enabled Tools for Resource Health providers often have a deep understanding of struc- Allocation: A Note on Potential Consequences tural and agentic variables that underlie and affect their day- to-day operations. 
Deployment of AI-Enabled Tools for Resource Allocation: A Note on Potential Consequences

The focus of this paper is on moral decisions pertaining to resource allocation, especially within pandemic-related settings. Since many people compete for limited resources and there is little time to make decisions, it is tempting, and in some cases advantageous, to apply AI-based tools. However, when humans use AI-enabled tools to make moral decisions, their internal decision-making processes are likely to be affected or influenced by technology. This could affect how providers assess, analyze, and treat patients, and short-term solutions can potentially have long-term unintended and unwanted effects. The following sections note some of the consequences on providers' decision-making processes concerning intuition, knowledge, and empathetic concern which may occur as a function of the long-term deployment of AI.

Disrupted and Conflicted Intuition

As AI continues to be incorporated in moral decision-making, providers will have to divide their attention between the AI's recommendations and their own intuitive judgement, especially if the two are different or opposing. They will face the added tension of determining tradeoffs between the machine and their own morals (Grote and Berens 2020), especially when applied to moral decision-making such as resource allocation. Such scenarios will require additional human cognitive input and will be more likely to interrupt the intuitive approaches doctors already use to make scarce resource allocation decisions.
It is arguable that the addition of AI could facilitate doctors' decision-making processes by sharing the cognitive burden pertaining to diagnostic evaluation. However, it is important to note that the process of moral decision-making comprises more than a mechanical diagnostic endeavor. It also includes how users react to, and accept or reject, suggestions from AI. Prior research has shown people's tendency to both accept and reject advice from algorithms (Dietvorst et al. 2015; Logg et al. 2019), and therefore it is likely that such judgements pertaining to the recommendations made by AI will also be made by doctors in conjunction with their own intuitive responses.

Disrupted and conflicted intuition is likely to affect the internal moral compass decision-makers use to organize their worlds. It will also be reflected in how they allocate resources, where some may exclusively rely on technology to mitigate their internal tensions and others may develop their own course of action. Although it is possible that providers use a combination where they select when to choose intuitive or machine judgement to make decisions, this will be a hard skill to learn, and thus difficult to use, especially in emergencies where decisions have to be made quickly.

Moral Outsourcing and Deskilling

Assigning and rationing resources between people is an issue that is directly tied to ethics and morality. Making moral decisions can be a difficult and distressing process because it typically involves trade-offs concerning self-interests and group needs, personal and cultural values, and immediate and future rewards within the context of health care (McCarthy and Deady 2008; Wright et al. 1997). As such, it requires that a decision-maker give such decisions careful attention, thought, and deliberation, along with engaging in interactions with others. Moral decision-making is thus a skill that is learned over time and with consistent practice by care providers. The deployment of AI-based tools to help with moral decision-making creates an increased risk of moral outsourcing, i.e. the tendency to allow machines to make moral decisions for us (see Danaher 2018). This is especially likely due to the human bias where machines are often considered fairer (Lee 2018). Thus, while the use of machine-based tools to help us make difficult decisions is inevitable, an over-reliance on AI to make ethical and moral decisions is problematic because it may lead to moral deskilling.

Health Care Provider-Patient Relationships

A patient's relationship with their health provider is based on several factors, including the provider's ability to empathize and make decisions that demonstrate their competence, expertise, and clinical prowess (Larson and Yao 2005). With the incorporation of AI in clinical settings, some have argued that the use of AI could augment providers' competency by helping them think about alternative diagnostic options or providing them with feedback on their performance. These factors could amplify the trust patients place in physicians (Nundy et al. 2019). This could be one of the outcomes of AI application in a regular clinical setting. However, in pandemics with high mortality rates, where resource shortages affect the day-to-day functioning of hospitals and clinics, the use of AI to make critical diagnostic and subsequently allocation decisions could theoretically be viewed differently by patients. Reliance on AI could lead patients and their families to question providers' competency to treat and care for patients, along with their ability to be fair. Patient doubts about providers' competence could amplify if the technology commits errors or is found to be biased (Nundy et al. 2019). Thus, we can imagine that situations such as these could easily erode the trust and belief patients and their families place in health care providers.
Bridging the Process-Outcome Divide: Toward the Development of Human-Centered AI Resource Allocation Systems in Health Care

Now that we have identified that there is a process-outcome divide in how moral decision-making is conceptualized, discussed, and applied within clinical settings, the next question is what we can do about it. The bridging of the process-outcome divide in moral decision-making can occur with the development of human-centered resource allocation AI systems as applied to clinical settings. A human-centered approach to AI development incorporates the perspectives and processes of the users of intelligent systems (see Xu 2019). Thus, AI systems using a human-centered approach are more likely to create synergy between human decision-making processes and machine outcomes to positively affect and amplify both physician and patient welfare. That being said, the challenge remains as to how we can achieve human-centered design in the deployment and development of AI tools.

To overcome this challenge, we first need to further unpack the process-outcome divide in the context of moral decision-making pertaining to resource allocation within a pandemic (or non-pandemic) setting. The "process" in the process-outcome divide refers to the human decision-making processes, such as empathy, intuition, and structural and agentic knowledge, which health care providers use (in addition to pre-determined clinical protocols and guidelines) to make diagnostic judgements and allocation decisions. The "outcome" refers to the machine-driven or related consequences or functionality, such as maximizing fairness or computational capacity. Hence, the process-outcome divide by nature can be said to also imply a human-machine divide. Note that here the term human-machine divide is not meant in the same way as its prior use in the context of technical features of the machine and how they are informed by human biology (e.g. neurons) (see Warwick 2015). The focus here is on how the process-outcome divide in perspectives on moral decision-making pertaining to resource allocation reflects the split between human processes of decision-making and machine-generated outcomes pertaining to resource allocation decisions. I argue that this divide could potentially be addressed by moving toward human-centered AI systems, which will require recognizing and iteratively integrating human processes in the development, deployment, management, and regulation of AI-enabled systems. To this end, the following sections present some recommendations on how human processes can be woven into the development of AI systems.
Health Care Providers as Co-Developers of AI Technology

An admittedly simplified way of understanding how AI-enabled technology is scaled is to reflect on two stages: development and deployment. More often than not, these tools are developed either independently (i.e. by a manufacturer/company or within academic settings) or in some consultation with health care providers. Once developed, they may be pitched to various health care providers, and the technology is customized to their needs. Sometimes, the technology is rolled out in phases where it is tested on a smaller level and subsequently expanded to include more patients or units (Gago et al. 2005). Thus, by and large, development of AI technology is followed by its deployment with lagged or punctuated feedback from the user to the developer. This practice indicates a bifurcation between the developers and users (here: health care providers).

This approach to the scaling of an AI-enabled decision support system may seem natural and functional. However, I argue that for including human processes of decision-making in how technology is used, it is best to see development and deployment linked together in an interactive and iterative process where they inform each other. This is particularly important in health care settings where the availability of and access to resources vary and the environment (e.g. infection and mortality rates, deaths, policy, information) changes rapidly and often unpredictably.

To create an iterative loop between development and deployment, the lines between developers and users of technology have to be blurred. While developers of technology can understand its various technical aspects and have the requisite knowledge and skills to build it, the users can often envisage its effects and uses more deeply due to their day-to-day experience, exposure to patients' needs, and structural and personnel-based issues in clinical settings. To understand this further, let us imagine that an AI program is designed to help decide if a patient gets a bed during a pandemic. The program conducts a risk assessment of the severity of the patient's condition by assigning scores on a pre-determined set of factors. One of the factors relates to prior health conditions, where the AI assigns a score in case a patient has any (e.g. a heart problem). However, health care providers may know via their day-to-day experiences that a patient without a prior health record in the hospital's system, and yet having an underlying condition, is likely to arrive in the emergency department. The patient might be unaccompanied and unable to report their medical history due to physical ailment or a language barrier. They could also be unaware of their underlying medical condition. In such a scenario, the use of an AI program that determines if a resource (e.g. a bed) can be allocated to a patient based on the above-mentioned criterion may not be the most appropriate option. If providers and developers work in an interactive and iterative fashion, then these observations could be passed on to the developers, who may be able to account for these issues, i.e. a lack of prior medical record, a language barrier, or being unaccompanied along with obvious severity in symptoms, when assessing risk. An AI program could then use a different scoring system which accounts for these variables to allocate beds. Thus, continual integration of human processes via relevant updates and modifications is more advantageous than one-time testing or multi-phase testing with a pre-determined end.
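As a hypothetical sketch of what folding this provider feedback into the program might look like, the revised rule below treats a missing record as unknown rather than as evidence of good health, and flags circumstances under which history is likely to be under-reported (all names and weights are invented):

```python
from typing import Optional

def bed_priority(severity: int,
                 prior_condition: Optional[bool],
                 unaccompanied: bool = False,
                 language_barrier: bool = False) -> float:
    """Hypothetical revision of a bed-allocation score after provider
    feedback. The original rule added risk only when a prior condition
    was on record; this version treats an absent record as unknown
    rather than as 'no condition'."""
    score = float(severity)
    if prior_condition is True:
        score += 2.0      # documented underlying condition
    elif prior_condition is None:
        score += 1.0      # no record on file: do not assume healthy
    if unaccompanied or language_barrier:
        score += 0.5      # history likely incomplete; err toward care
    return score

# A patient with no record in the hospital system and a language
# barrier is no longer scored as if they had no underlying condition:
print(bed_priority(severity=6, prior_condition=None, language_barrier=True))  # 7.5
print(bed_priority(severity=6, prior_condition=False))                        # 6.0
```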
Prior research has shown that users and developers can co-create technology by contributing their differing expertise in a process called cooperative prototyping (Bødker and Grønbæk 1991). However, rapidly changing health environments require us to leap from the cooperative prototyping approach to iterative cooperative development and management of technology. When health care providers act as co-developers of technology, it will allow them to fuse the human processes (e.g. empathy, intuition) involved in moral decision-making into building and updating AI systems.

That being said, it must be mentioned that from normative and prescriptive perspectives, the extension of providers as co-developers of technology may sound like an appealing and useful idea. However, from a practical point of view, this may prove to be a difficult enterprise because it would require health care facilities to dedicate personnel and their time toward the development of such systems. Presumably, this thought may be a deterrent for some to adopt such measures and protocols. However, I argue that in the long run, this will be a small cost to bear. A team of health professionals who are dedicated to testing AI-enabled intelligent decision support systems can only better the technology, which in turn will produce superior outcomes and decrease the cost of day-to-day operations as well as reduce risks associated with poorly designed systems. Consider this with reference to the latest developments in space science. IBM developed a robot called CIMON which was tested for its efficacy by an astronaut aboard the International Space Station (ISS) (CIMON Brings AI to the International Space Station n.d.). The feedback and testing allowed for a new and upgraded robot, CIMON-2, to be sent to the ISS (IBM 2020). The developers and user (i.e. the astronaut) of the space robot played important roles in both the development and deployment of the technology before it could be used in a high-stakes environment such as a space mission. Health care settings should be treated no less than a space mission, as they are high-stakes and expense-laden environments which affect socioeconomic and mortal outcomes for billions of people around the world. It logically follows, then, that AI-enabled decision support systems within health care should not only be tested regularly but also be informed by the very users who employ them to make critical resource allocation decisions.

Developing AI-Focused In-House Capacities

As decision-makers, people are managed by others such as human resource departments, administrative procedures or protocols, upper management, etc. These institutional actors and protocols manage human activities, solve issues, and recommend further actions. Management also extends to medical equipment, as hospitals and clinics often have technical staff or support teams on site or procured via third-party contracts. However, such staff or administrative units are often missing when it comes to overseeing AI-enabled technology. Many health care settings deploy AI with little to no oversight of these systems, since their management requires particular skill sets. The increasing sophistication of AI-enabled systems and their authority to pass judgement (assign risks to patients/calculate scores) makes them not only tools but also decision-making actors to some extent. As actors and tools, they too need supervision. Therefore, health care administrators will need to develop in-house expertise and create departments that are specifically dedicated to the monitoring, modification, and management of AI-enabled systems or embodied intelligent assistants such as robots, which have increasingly become a part of health care settings.

Such an endeavor would have several benefits, as it would: a. allow the integration of providers' perspectives within the AI system, b. identify any issues quickly, and c. make modifications to the system potentially within the clinical settings, or outsource them to the third-party or original developers, within a short period of time. A sketch of one concrete monitoring practice follows.
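One form such in-house oversight could take is routine logging of the system's recommendations against providers' final decisions, so that a dedicated team can watch for drift or systematic disagreement. A minimal sketch, with an invented log format and file name:

```python
import csv
from datetime import datetime, timezone

AUDIT_LOG = "ai_allocation_audit.csv"  # hypothetical in-house log file

def log_decision(patient_id: str, ai_recommendation: str,
                 provider_decision: str, model_version: str) -> None:
    """Append one AI-recommendation/human-decision pair for later review."""
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            patient_id,
            ai_recommendation,
            provider_decision,
            provider_decision != ai_recommendation,  # override flag
            model_version,
        ])

def override_rate(path: str = AUDIT_LOG) -> float:
    """Share of logged decisions where the provider overrode the AI;
    a rising rate is a cue for the oversight team to re-examine the model."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    return sum(row[4] == "True" for row in rows) / len(rows)

log_decision("p-0042", "allocate_bed", "allocate_bed", "v1.3")
log_decision("p-0043", "defer", "allocate_bed", "v1.3")
print(f"override rate so far: {override_rate():.0%}")
```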
Developing AI-Specific Regulations, Protocols, and Ethical Guidelines and Educating Providers

It was argued above that one of the long-term consequences of using AI to make moral decisions could manifest as humans putting in more cognitive effort and challenging their intuition, especially if personal judgement were at odds with the advice given by an AI program. It was also suggested that an over-reliance on intelligent systems could lead to moral outsourcing and deskilling. To preempt such scenarios, health care providers will need to create specific protocols and regulatory and ethical guidelines to regulate the use of AI within their premises. These guidelines and directives will need to specify whose judgment (i.e. human or AI) will be considered the final say when making a diagnostic or allocation decision. These guidelines will also have to delineate parameters for assigning culpability and responsibility if choices made by a human in conjunction with or against the advice of AI result in adverse outcomes. These regulations will help providers understand their roles in moral decision-making and allow them to continue sharpening their skills when it comes to moral decision-making in the presence of AI.

Additionally, medical schools and educational programs will also need to train providers and students on how to interact with AI, evaluate its judgement, and understand its effects on human decision-making. Together, these practices will allow providers to better understand AI, its management, and its relationship with humans within clinical settings.
Conclusion

The core argument presented in this paper is that the discussion on AI decision support tools used in moral decision-making, such as resource allocation within clinical settings, provides competing narratives which delineate the pros and cons of AI in terms of the material and abstract outcomes the technology produces. Such perspectives distract us from focusing on the role human decision-making processes such as empathy, intuition, and structural and agentic knowledge play in resource allocation decisions. This scenario reflects a process-outcome divide in the current perspectives on moral decision-making within health care settings. If these human processes are disregarded while AI is used to make moral decisions, it may result in long-term consequences such as conflicted intuition, moral outsourcing and deskilling, and poor patient-provider relationships. To preempt some of these consequences and create better health outcomes, researchers, developers, and policymakers should seriously consider the importance of human processes along with machine-driven outcomes. One of the ways we can bridge the process-outcome divide is to create human-centered AI systems specific to health care. To this end, some recommendations are proposed: a. health care providers should work with developers of technology as co-developers in an iterative and interactive fashion, b. health care facilities should develop in-house AI expertise and create a specific department to manage, regulate, and modify the technology, and c. regulatory protocols and guidelines specific to the use of AI in making moral decisions should be developed. These guidelines should specify how and when humans should override AI decisions. They should also delineate rules on culpability should a decision made in conjunction with or against AI advice produce adverse effects. Furthermore, providers and students should be trained in understanding the effects of AI on their decision-making. Together, these endeavors could help with taking and implementing a broader and more human-centered perspective on the use of AI in health care to advance social good.
References

Adly, A. S., Adly, A. S., and Adly, M. S. 2020. Approaches Based on Artificial Intelligence and the Internet of Intelligent Things to Prevent the Spread of COVID-19: Scoping Review. Journal of Medical Internet Research 22(8): e19104. doi.org/10.2196/19104

Batson, C. D. 2016. Empathy and Altruism. In The Oxford Handbook of Hypo-egoic Phenomena, edited by K. W. Brown and M. R. Leary, 161-174. New York: Oxford University Press.

Bødker, S., and Grønbæk, K. 1991. Cooperative Prototyping: Users and Designers in Mutual Activity. International Journal of Man-Machine Studies 34(3): 453-478. doi.org/10.1016/0020-7373(91)90030-B

CIMON Brings AI to the International Space Station. n.d. Accessed October 15, 2020. https://www.ibm.com/thought-leadership/innovation_explanations/article/cimon-ai-in-space.html

CIMON-2 Masters Its Debut on the International Space Station. IBM. April 15, 2020. https://newsroom.ibm.com/2020-04-15-CIMON-2-Masters-Its-Debut-on-the-International-Space-Station

Danaher, J. 2018. Toward an Ethics of AI Assistants: An Initial Framework. Philosophy & Technology 31(4): 629-653. doi.org/10.1007/s13347-018-0317-3

Debnath, S., Barnaby, D. P., Coppa, K., Makhnevich, A., Kim, E. J., Chatterjee, S., Tóth, V., Levy, T. J., Paradis, M. D., Cohen, S. L., Hirsch, J. S., Zanos, T. P., and the Northwell COVID-19 Research Consortium. 2020. Machine Learning to Assist Clinical Decision-Making During the COVID-19 Pandemic. Bioelectronic Medicine 6(1): 1-8. doi.org/10.1186/s42234-020-00050-8

Dietvorst, B. J., Simmons, J. P., and Massey, C. 2015. Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General 144(1): 114-126. doi.org/10.1037/xge0000033

Gago, P., Santos, M. F., Silva, A., Cortez, P., Neves, J., and Gomes, L. 2005. INTCare: A Knowledge Discovery Based Intelligent Decision Support System for Intensive Care Medicine. Journal of Decision Systems 14(3): 241-259. doi.org/10.3166/jds.14.241-259

Ghassemi, M. M., Al-Hanai, T., Raffa, J. D., Mark, R. G., Nemati, S., and Chokshi, F. H. 2018. How is the Doctor Feeling? ICU Provider Sentiment Is Associated with Diagnostic Imaging Utilization. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, 4058-4064. IEEE. doi.org/10.1109/EMBC.2018.8513325

Grote, T., and Berens, P. 2020. On the Ethics of Algorithmic Decision-Making in Healthcare. Journal of Medical Ethics 46(3): 205-211. doi.org/10.1136/medethics-2019-105586

Lamanna, C., and Byrne, L. 2018. Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. AMA Journal of Ethics 20(9): E902-E910. doi.org/10.1001/amajethics.2018.902

Larson, E. B., and Yao, X. 2005. Clinical Empathy as Emotional Labor in the Patient-Physician Relationship. The Journal of the American Medical Association 293(9): 1100-1106. doi.org/10.1001/jama.293.9.1100

Lemieux-Charles, L., Meslin, E. M., Aird, C., Baker, R., and Leatt, P. 1993. Ethical Issues Faced by Clinician/Managers in Resource-Allocation Decisions. Hospital & Health Services Administration 38(2): 267-285.

Le Reste, J.-Y., Coppens, M., Barais, M., Nabbe, P., Le Floch, B., Chiron, B., Dinant, G. J., Berkhout, C., Stolper, E., and Barraine, P. 2013. The Transculturality of "Gut Feelings": Results from a French Delphi Consensus Survey. The European Journal of General Practice 19(4): 237-243. doi.org/10.3109/13814788.2013.779662

Logg, J. M., Minson, J. A., and Moore, D. A. 2019. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes 151: 90-103. doi.org/10.1016/j.obhdp.2018.12.005

McCarthy, J., and Deady, R. 2008. Moral Distress Reconsidered. Nursing Ethics 15(2): 254-262. doi.org/10.1177/0969733007086023

Medlej, K. 2018. Calculated Decisions: Sequential Organ Failure Assessment (SOFA) Score. Emergency Medicine Practice 20(10): CD1-CD2.

Nundy, S., Montgomery, T., and Wachter, R. M. 2019. Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. The Journal of the American Medical Association. doi.org/10.1001/jama.2018.20563

Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. 2019. Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science 366(6464): 447-453. doi.org/10.1126/science.aax2342

Rew, L. 2000. Acknowledging Intuition in Clinical Decision Making. Journal of Holistic Nursing 18(2): 94-113. doi.org/10.1177/089801010001800202

Robert, R., Kentish-Barnes, N., Boyer, A., Laurent, A., Azoulay, E., and Reignier, J. 2020. Ethical Dilemmas Due to the COVID-19 Pandemic. Annals of Intensive Care 10(1): 84. doi.org/10.1186/s13613-020-00702-7

Röösli, E., and Rice, B. 2020. Bias at Warp Speed: How AI May Contribute to the Disparities Gap in the Time of COVID-19. Journal of the American Medical Informatics Association: 1-3. doi.org/10.1093/jamia/ocaa210

Selph, R. B., Shiang, J., Engelberg, R., Curtis, J. R., and White, D. B. 2008. Empathy and Life Support Decisions in Intensive Care Units. Journal of General Internal Medicine 23(9): 1311-1317. doi.org/10.1007/s11606-008-0643-8

Shea, G. P., Laudansky, K. K., and Solomon, C. A. 2020 (March 27). Triage in a Pandemic: Can AI Help Ration Access to Care? https://knowledge.wharton.upenn.edu/article/triage-in-a-pandemic-can-ai-help-ration-access-to-care/

Sinclair, M., and Ashkanasy, N. M. 2005. Intuition: Myth or a Decision-Making Tool? Management Learning 36(3): 353-370. doi.org/10.1177/1350507605055351

Van de Brink, N., Holbrechts, B., Brand, P. L. P., Stolper, E. C. F., and Van Royen, P. 2019. Role of Intuitive Knowledge in the Diagnostic Reasoning of Hospital Specialists: A Focus Group Study. BMJ Open 9(1): e022724. doi.org/10.1136/bmjopen-2018-022724

Warwick, K. 2015. The Disappearing Human-Machine Divide. In Beyond Artificial Intelligence, edited by J. Romportl, E. Zackova, and J. Kelemen, 1-10. Switzerland: Springer.

Wright, F., Cohen, S., and Caroselli, C. 1997. Diverse Decisions: How Culture Affects Ethical Decision Making. Critical Care Nursing Clinics of North America 9(1): 63-74. doi.org/10.1016/S0899-5885(18)30292-2

Xu, W. 2019. Toward Human-Centered AI: A Perspective from Human-Computer Interaction. Interactions 26(4): 42-46. doi.org/10.1145/3328485