<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Artificial Intelligence and Resource Allocation in Health Care: The Process-Outcome Divide in Perspectives on Moral Decision-Making</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sonia Jawaid Shaikh</string-name>
          <email>sjshaikh@asc.upenn.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Annenberg School for Communication, University of Pennsylvania</institution>
          <addr-line>3620 Walnut St, Philadelphia, PA 19104</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>36</volume>
      <issue>3</issue>
      <fpage>1</fpage>
      <lpage>10</lpage>
      <abstract>
        <p>Pandemics or health emergencies create situations where the demand for clinical resources greatly exceeds the supply, leading to health providers making morally complex resource allocation decisions. To help with these types of decisions, health care providers are increasingly deploying artificial intelligence (AI)-enabled intelligent decision support systems. This paper presents a synopsis of the current debate on these AI-enabled tools to suggest that the existing commentary is outcome-centric, i.e. it presents competing narratives where AI is described as a cause of problematic or solution-oriented abstract and material outcomes. Human decision-making processes such as empathy, intuition, and structural and agentic knowledge that go into making moral decisions in clinical settings are largely ignored in this discussion. It is argued here that this process-outcome divide in our understanding of moral decision-making can prevent us from taking the long view on consequences such as conflicted intuition, moral outsourcing and deskilling, and strained provider-patient relationships that can emerge from the long-term deployment of technology devoid of human processes. To preempt some of these effects and create improved systems, researchers, providers, designers, and policymakers should bridge the process-outcome divide by moving toward human-centered resource allocation AI systems. Recommendations on bringing the human-centered perspective to the development of AI systems are discussed in this paper.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Resource allocation is a type of decision-making which involves the procurement, assignment, and distribution of resources between actors. Typically, decision-making pertaining to resource allocation is considered a difficult enterprise because access to or ownership of resources is competitive and can affect not only individual and group health but also a set of interconnected socio-economic variables. Furthermore, this type of decision-making has a moral dimension, as it requires humans (e.g. physicians, hospital administrators, nurses) to make tradeoffs, such as those involving utilitarian and egalitarian parameters, which exacerbates its complexity (Robert et al. 2020). In a pandemic such as COVID-19, the demand for resources (e.g. beds, ventilators, ICU units) is many times greater than their supply, which complicates the problem of allocation. This situation forces health care providers to set up a variety of triage protocols to determine if and to what extent someone qualifies for one or more resources.
      </p>
      <p>
        One of the ways providers are making resource allocation decisions during the COVID-19 pandemic is to use AI-enabled decision support systems. These systems can use patients’ electronic health records (EHR) and/or clinical measurements (e.g. blood pressure, fever, health conditions) to make diagnoses and prognoses (Lamanna and Byrne 2018; Medlej 2018), which can subsequently be used to make decisions about the level of care and the allocation of resources to patients (Debnath et al. 2020).
      </p>
      <p>Since the technology can affect people’s lives in multiple ways, it presents itself as a problem concerning social good and thus deserves our attention. Furthermore, as it becomes more sophisticated and widely deployed, it needs to be understood now to preempt any negative consequences on human decision-making practices and to generate helpful policies and guidelines. To enhance our understanding of this matter, this paper presents a brief synopsis of the debates surrounding the use of AI to help humans make moral decisions pertaining to resource allocation during health crises. The core argument made here is that the current debate on the deployment of AI-enabled decision support systems comprises competing narratives that are mostly outcome-centric, i.e. they focus on the abstract (e.g. fairness) and material (e.g. saving costs) outcomes AI technology can yield. By emphasizing these outcomes, we fundamentally ignore the human decision-making processes that health care providers also use, in addition to established clinical guidelines, when allocating resources. The disregard of human processes of decision-making in the pursuit of AI’s use in resource allocation might have long-term consequences for the design of technology and providers’ abilities to make decisions. My hope is that this paper will provide insights to researchers, designers, and policymakers to bridge the process-outcome divide and implement more human-centered technology for moral decision-making in clinical settings.</p>
      <p>To support these arguments, the following sections begin by presenting a brief synthesis of current perspectives on using AI for resource allocation during COVID-19. We observe that the predominant ideas on making resource allocation decisions reflect a process-outcome divide, where the current debate is heavily tilted toward framing AI as an entity that can create outcomes which are either problematic or solution-oriented in nature. This approach disregards human processes such as empathy, intuition, and structural and agentic knowledge that health care providers rely upon to make resource allocation decisions. The paper then notes the possible effects of long-term deployment of outcome-centric technology on human decision-making, such as disruptions in intuitive processes, moral outsourcing and deskilling, and strained relationships with patients. It is recommended here that the process-outcome divide be bridged by creating human-centered AI systems, which can preempt adverse consequences for providers and patients. Human-centered AI systems in health care can be fostered by having providers work as co-developers of technology, building in-house AI capabilities, and developing regulations pertaining to the use of AI in health care settings.</p>
    </sec>
    <sec id="sec-2">
      <title>AI and Resource Allocation: An Outcome</title>
    </sec>
    <sec id="sec-3">
      <title>Centric Approach to Decision-Making</title>
      <p>
        Fairness is a recurrent theme in the discussion on the use of AI to make moral decisions. The concept of fairness, when applied to any resource allocation process, refers to a decision that reduces biases which may prioritize one group over another, or that increases equity between different stakeholders. The goal of any decision-maker, therefore, is to increase fairness as an outcome. However, the debate on the use of AI with reference to fairness comprises competing narratives. Some argue that AI-based tools in health care resource allocation during the COVID-19 pandemic are useful because machines are driven by complex logic and predetermined parameters and can therefore be fair and less biased resource allocators (Shea et al. 2020). This perspective frames AI-enabled intelligent decision support systems as solutions that can amplify fairness.
      </p>
      <p>
        However, a competing narrative suggests the opposite. It argues that AI systems can create problems as they may reflect existing systemic biases and are thus more likely to make unfair appraisals, exacerbating inequality between different racial and socioeconomic groups (Röösli et al. 2020). For instance, a study found that a commercial algorithm factored in health care costs more heavily than physiological symptoms of illness, which led to sicker Black patients being provided with fewer services (Obermeyer et al. 2019). In the case of a pandemic such as COVID-19, an AI-based tool might use underlying health conditions (e.g. obesity, heart problems) or disability to predict a lower chance of recovery for a patient suffering from virus-induced complications, which would affect the likelihood of them receiving a hospital bed. An AI system such as this is likely to perpetuate unfair outcomes by giving people who suffer from ailments due to socioeconomic inequalities a lower chance of accessing a health resource.
      </p>
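      <p>To make this mechanism concrete, the following is a minimal, hypothetical sketch of how scoring need through a cost proxy can disadvantage a sicker but lower-spending group. All variable names, weights, and numbers are invented for illustration; this is not the algorithm studied by Obermeyer et al. (2019), only a toy analogue of the failure mode they describe.</p>
      <preformat>
# Toy illustration of proxy-label bias in an allocation score (Python).
# All numbers and names are hypothetical.

def need_score_by_cost(past_costs_usd: float) -> float:
    """Score 'need' using past health care spending as a proxy."""
    return past_costs_usd / 10_000.0

def need_score_by_symptoms(n_active_conditions: int) -> float:
    """Score 'need' directly from physiological burden."""
    return float(n_active_conditions)

# Two equally sick patients; one has spent less on care in the past,
# e.g. because of unequal access to services.
patient_a = {"conditions": 5, "past_costs_usd": 40_000}
patient_b = {"conditions": 5, "past_costs_usd": 12_000}

for name, p in [("A", patient_a), ("B", patient_b)]:
    print(name,
          "cost-proxy score:", need_score_by_cost(p["past_costs_usd"]),
          "symptom score:", need_score_by_symptoms(p["conditions"]))
# The cost proxy ranks patient A (4.0) well above patient B (1.2)
# despite identical sickness, so any threshold on that score
# allocates services unequally.
      </preformat>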
      <p>
        In addition to fairness, another theme concerning the use of AI tools in health care resource allocation pertains to the effects of AI’s computational prowess on various abstract and material outcomes. It has been argued that, unlike humans, machines have extraordinary computing and information processing power which allows them to gather, analyze, and interpret data quickly, ultimately helping with timely diagnosis and provision of care (Shea et al. 2020). This not only helps determine who gets resources but also saves health care costs, effort, and time (Adly et al. 2020). A contrasting take suggests that AI’s very computational reach and scope can backfire if a corrupt algorithm incorrectly diagnoses or distributes resources to many people in a very short amount of time.
      </p>
      <p>It is evident that the current perspectives discussed above on deploying AI to make resource allocation decisions are concerned with tackling abstract (e.g. fairness) or material outputs (e.g. costs). Such framing is not only outcome-centric but also overly simplifies complex phenomena concerning moral decision-making such as resource allocation. The application of AI-enabled technology to allocate resources occurs in human contexts where decision-makers use distinct and identifiable processes to make moral decisions. An AI-enabled system neither accounts for nor substitutes for these processes; therefore, the need for identifying and discussing human processes becomes ever more important, not only to fill theoretical voids but also to affect how we design and implement technology.</p>
    </sec>
    <sec id="sec-4">
      <title>The Role of Human Decision-Making Processes in Diagnoses and Resource Allocation</title>
      <p>Clinical decision-making in the realm of resource allocation is a multifaceted activity which requires providers to use complex decision-making processes in addition to predefined, well-established scientific protocols. The sections below present a brief discussion of the distinct processes which are regularly employed in moral decision-making by health care providers but are yet to be explored within the context of resource allocation, especially during a pandemic.</p>
    </sec>
    <sec id="sec-5">
      <title>Intuition</title>
      <p>
        Intuition refers to an information processing mode which lacks conscious reasoning but incorporates affective and cognitive elements to make decisions (Sinclair and Ashkanasy 2005). Doctors and caregivers often use their intuitions as part of their clinical decision-making processes, in addition to using guidelines and medical procedures (Van de Brink et al. 2019; Rew 2000). The use of intuition or ‘gut feelings’ to allocate resources has been documented across cultures (Le Reste et al. 2013; Ruzca et al. 2020). Intuitive decision-making processes can affect how health care providers make diagnostic recommendations which lead to the allocation of services. For instance, findings from a large-scale study show that doctors’ sentiments affected the number of tests their patients received in an ICU setting (Ghassemi et al. 2018). This suggests that providers’ intuition plays an important role in making diagnostic and, subsequently, resource allocation decisions.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Structural and Agentic Knowledge</title>
      <p>Health providers often have a deep understanding of structural and agentic variables that underlie and affect their day-to-day operations. For instance, they may know how their hospital’s location affects resource procurement, patient arrival, and admission. Providers are also more likely to be aware of differences in hospital personnel personality types, work ethics, interpersonal relationships, cultural values and political beliefs, bureaucratic procedures and administrative conduct, equipment issues, etc. Together, this conscious and subconscious knowledge of structural and agentic factors can guide providers’ moral decision-making and thus how they allocate resources (Lemieux-Charles et al. 1993). Current technology can hardly substitute for this knowledge, as it exists beyond the purview of an AI-enabled tool.</p>
    </sec>
    <sec id="sec-7">
      <title>Empathetic Concern</title>
      <p>Many diagnostic and moral decisions are driven by empathetic concern for others (Selph et al. 2008). Hence, it is no surprise that care providers often take an empathetic approach to identify illnesses or allocate specific resources. Empathetic concern plays a very important role in identifying biases in any system, policy, and practice. For instance, when we reflect on a process empathetically, we are more likely to understand how it affects people, which can allow us to intervene to help and make changes for them (Batson 2016). To illustrate this further, let’s reconsider the findings on the use of an algorithm which led to Black patients being given access to fewer resources despite being sicker, since the tool factored the costs of health care more heavily when allocating services (Obermeyer et al. 2019). If the AI program had been developed using an empathetic approach, it might have accounted for the fact that Black patients on average have lower incomes than White patients and thus are less likely to spend on hospital services despite having more physiological symptoms. This shows that the design and use of technology to make moral decisions without any concern for empathetic human processes can be reflected in the outcomes the technology produces.</p>
      <p>While intuition, structural and agentic knowledge, and empathetic concern are important processes that help and guide moral decision-making in clinical (and non-clinical) settings, they are largely ignored in the debate on the use of AI to allocate resources, since competing narratives are mostly focused on the outcomes the technology produces. This raises the question of what the deployment of AI means for human processes in decision-making. To help researchers, administrators, and policymakers engaged in long-term planning and thinking, I present some reflections on the possible effects of AI-enabled tools devoid of elements concerning human processes.</p>
    </sec>
    <sec id="sec-8">
      <title>Deployment of AI-Enabled Tools for Resource</title>
    </sec>
    <sec id="sec-9">
      <title>Allocation: A Note on Potential Consequences</title>
      <p>The focus of this paper is on moral decisions pertaining to resource allocation, especially within pandemic-related settings. Since many people compete for limited resources and there is little time to make decisions, it is tempting, and in some cases advantageous, to apply AI-based tools. However, when humans use AI-enabled tools to make moral decisions, their internal decision-making processes are likely to be affected or influenced by the technology. This could affect how providers assess, analyze, and treat patients, and short-term solutions can potentially have long-term unintended and unwanted effects. The following sections note some of the consequences for providers’ decision-making processes concerning intuition, knowledge, and empathetic concern which may occur as a function of long-term deployment of AI.</p>
    </sec>
    <sec id="sec-10">
      <title>Disrupted and Conflicted Intuition</title>
      <p>
        As AI continues to be incorporated in moral decision-making, providers will have to divide their attention between the AI’s recommendations and their own intuitive judgement, especially if the two are different or opposing. They will face the added tension of determining tradeoffs between the machine and their own morals (Grote and Berens 2020), especially when applied to moral decision-making such as resource allocation. Such scenarios will require additional human cognitive input and will be more likely to interrupt the intuitive approaches doctors already use to make decisions about allocating scarce resources.
      </p>
      <p>It is arguable that the addition of AI could facilitate doctors’ decision-making processes by sharing the cognitive burden pertaining to diagnostic evaluation. However, it is important to note that the process of moral decision-making comprises more than a mechanical diagnostic endeavor. It also includes how users react to and accept or reject suggestions from AI. Prior research has shown people’s tendency to both accept and reject advice from algorithms (Dietvorst et al. 2015; Logg et al. 2019), and therefore it is likely that such judgements pertaining to the recommendations made by AI will also be made by doctors in conjunction with their own intuitive responses.</p>
      <p>Disrupted and conflicted intuition is likely to affect the internal moral compass decision-makers use to organize their worlds. It will also be reflected in how they allocate resources, where some may exclusively rely on technology to mitigate their internal tensions while others may develop their own course of action. Although it is possible that providers will use a combination, selecting when to rely on intuitive or machine judgement to make decisions, this will be a hard skill to learn and thus difficult to use, especially in emergencies where decisions have to be made quickly.</p>
    </sec>
    <sec id="sec-11">
      <title>Moral Outsourcing and Deskilling</title>
      <p>
        Assigning and rationing resources between people is an issue directly tied to ethics and morality. Making moral decisions can be a difficult and distressing process because it typically involves trade-offs concerning self-interests and group needs, personal and cultural values, and immediate and future rewards within the context of health care (McCarthy and Deady 2008; Wright et al. 1997). As such, it requires that a decision-maker give such decisions careful attention, thought, and deliberation, along with engaging in interactions with others. Moral decision-making is thus a skill that is learned over time and with consistent practice by care providers. The deployment of AI-based tools to help with moral decision-making creates an increased risk of moral outsourcing, i.e. the tendency to allow machines to make moral decisions for us (see Danaher 2018). This is especially likely due to the human bias whereby machines are often considered fairer (Lee 2018). Thus, while the use of machine-based tools to help us make difficult decisions is inevitable, an over-reliance on AI to make ethical and moral decisions is problematic because it may lead to moral deskilling.
      </p>
    </sec>
    <sec id="sec-12">
      <title>Health Care Provider-Patient Relationships</title>
      <p>
        A patient’s relationship with their health provider is based on several factors, including the provider’s ability to empathize and to make decisions that demonstrate their competence, expertise, and clinical prowess (Larson and Yao 2005). With the incorporation of AI in clinical settings, some have argued that the use of AI could augment providers’ competency by helping them think about alternative diagnostic options or providing them with feedback on their performance. These factors could amplify the trust patients place in physicians (Nundy et al. 2019). This could be one of the outcomes of AI application in a regular clinical setting. However, in pandemics with high mortality rates, where resource shortages affect the day-to-day functioning of hospitals and clinics, the use of AI to make critical diagnostic and, subsequently, allocation decisions could be viewed differently by patients. Reliance on AI could lead patients and their families to question providers’ competency to treat and care for patients, along with their ability to be fair. Patients’ doubts about providers’ competence could amplify if the technology commits errors or is found to be biased (Nundy et al. 2019). Thus, we can imagine that situations such as these could easily erode the trust and belief patients and their families place in health care providers.
      </p>
    </sec>
    <sec id="sec-13">
      <title>Bridging the Process-Outcome Divide: To</title>
      <p>ward the Development of Human-Centered AI</p>
    </sec>
    <sec id="sec-14">
      <title>Resource Allocation Systems in Health Care</title>
      <p>Now that we have identified a process-outcome divide in how moral decision-making is conceptualized, discussed, and applied within clinical settings, the next question is what we can do about it. Bridging the process-outcome divide in moral decision-making can occur through the development of human-centered resource allocation AI systems applied to clinical settings. A human-centered approach to AI development incorporates the perspectives and processes of the users of intelligent systems (see Xu 2019). Thus, AI systems using a human-centered approach are more likely to create synergy between human decision-making processes and machine outcomes and to positively affect and amplify both physician and patient welfare. That being said, the challenge remains as to how we can achieve human-centered design in the development and deployment of AI tools.</p>
      <p>To overcome this challenge, we first need to further unpack the process-outcome divide in the context of moral decision-making pertaining to resource allocation within a pandemic (or non-pandemic) setting. The “process” in the process-outcome divide refers to the human decision-making processes, such as empathy, intuition, and structural and agentic knowledge, which health care providers use (in addition to pre-determined clinical protocols and guidelines) to make diagnostic judgements and allocation decisions. The “outcome” refers to machine-driven or related consequences or functionality, such as maximizing fairness or computational capacity. Hence, the process-outcome divide can be said to also imply a human-machine divide. Note that the term human-machine divide is not meant here in the same way as its prior use in the context of technical features of the machine and how they are informed by human biology (e.g. neurons) (see Warwick 2015). The focus here is on how the process-outcome divide in perspectives on moral decision-making pertaining to resource allocation reflects the split between human processes of decision-making and machine-generated outcomes pertaining to resource allocation decisions. I argue that this divide could potentially be addressed by moving toward human-centered AI systems, which will require recognizing and iteratively integrating human processes in the development, deployment, management, and regulation of AI-enabled systems. To this end, the following sections present some recommendations on how human processes can be woven into the development of AI systems.</p>
    </sec>
    <sec id="sec-15">
      <title>Health Care Providers as Co-Developers of AI</title>
    </sec>
    <sec id="sec-16">
      <title>Technology</title>
      <p>
        An admittedly simplified way of understanding how AI-enabled technology is scaled is to reflect on two stages: development and deployment. More often than not, these tools are developed either independently (i.e. by a manufacturer/company or within academic settings) or in some consultation with health care providers. Once developed, they may be pitched to various health care providers, where the technology is customized to their needs. Sometimes, the technology is rolled out in phases, where it is tested on a smaller level and subsequently expanded to include more patients or units (Gago et al. 2005). Thus, by and large, development of AI technology is followed by its deployment, with lagged or punctuated feedback from the user to the developer. This practice indicates a bifurcation between the developers and users (here: health care providers).
      </p>
      <p>This approach to the scaling of an AI-enabled decision support system may seem natural and functional. However, I argue that in order to include human processes of decision-making in how technology is used, it is best to see development and deployment as linked together in an interactive and iterative process where they inform each other. This is particularly important in health care settings, where the availability of and access to resources vary and the environment (e.g. infection and mortality rates, deaths, policy, information) changes rapidly and often unpredictably.</p>
      <p>To create an iterative loop between development and deployment, the lines between developers and users of technology have to be blurred. While developers of technology understand its various technical aspects and have the requisite knowledge and skills to build it, the users can often envisage its effects and uses more deeply due to their day-to-day experience, exposure to patients’ needs, and familiarity with structural and personnel-based issues in clinical settings. To understand this further, let us imagine that an AI program is designed to help decide if a patient gets a bed during a pandemic. The program conducts a risk assessment of the severity of the patient’s condition by assigning scores on a pre-determined set of factors. One of the factors relates to prior health conditions, where the AI assigns a score in case a patient has any (e.g. a heart problem). However, health care providers may know from their day-to-day experience that a patient without a prior health record in the hospital’s system, yet having an underlying condition, is likely to arrive in the emergency department. The patient might be unaccompanied and unable to report their medical history due to physical ailment or a language barrier. They could also be unaware of their underlying medical condition. In such a scenario, the use of an AI program that determines whether a resource (e.g. a bed) can be allocated to a patient based on the above-mentioned criterion may not be the most appropriate option. If providers and developers work in an interactive and iterative fashion, these observations could be passed on to the developers, who may be able to account for these issues, i.e. a lack of prior medical record, a language barrier, or being unaccompanied along with obvious severity of symptoms, when assessing risk. An AI program could then use a different scoring system which accounts for these variables to allocate beds, as sketched below. Thus, continual integration of human processes via relevant updates and modifications is more advantageous than one-time testing or multi-phase testing with a pre-determined end.</p>
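      <p>The following is a minimal sketch of the revised scoring idea described above. All factor names, weights, and the missing-record rule are hypothetical, invented for illustration rather than drawn from any deployed system; the point is only to show how provider feedback about absent records could be encoded.</p>
      <preformat>
# Toy version of the bed-allocation scoring example (Python).
# Weights and rules are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    symptom_severity: int            # 0-10, observed at the bedside
    prior_conditions: Optional[int]  # None if no record is available
    unaccompanied: bool
    language_barrier: bool

def risk_score_v1(p: Patient) -> float:
    """Original rule: a missing history silently counts as 'healthy'."""
    return float(p.symptom_severity + 2 * (p.prior_conditions or 0))

def risk_score_v2(p: Patient) -> float:
    """Revised rule informed by providers: if history is unavailable
    and the patient cannot report it, weight observed severity more
    heavily instead of assuming a clean record."""
    if p.prior_conditions is None and (p.unaccompanied or p.language_barrier):
        return 1.5 * p.symptom_severity
    return float(p.symptom_severity + 2 * (p.prior_conditions or 0))

arrival = Patient(symptom_severity=8, prior_conditions=None,
                  unaccompanied=True, language_barrier=True)
print(risk_score_v1(arrival), risk_score_v2(arrival))  # 8.0 vs 12.0
      </preformat>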
      <p>Prior research has shown that users and developers can co-create technology by contributing their differing expertise in a process called cooperative prototyping (Bødker and Grønbaek 1991). However, rapidly changing health environments require us to leap from the cooperative prototyping approach to iterative cooperative development and management of technology. When health care providers act as co-developers of technology, they will be able to fuse the human processes (e.g. empathy, intuition) involved in moral decision-making into building and updating AI systems.</p>
      <p>That being said, from normative and prescriptive perspectives, the extension of providers as co-developers of technology may sound like an appealing and useful idea. From a practical point of view, however, this may prove to be a difficult enterprise because it would require health care facilities to dedicate personnel and their time toward the development of such systems. Presumably, this thought may be a deterrent for some to adopt such measures and protocols. However, I argue that in the long run this will be a small cost to bear. A team of health professionals dedicated to testing AI-enabled intelligent decision support systems can only better the technology, which in turn will produce superior outcomes, decrease the cost of day-to-day operations, and reduce the risks associated with poorly designed systems. Consider this with reference to the latest developments in space science. IBM, together with Airbus, developed a robot called CIMON which was tested for its efficacy by an astronaut aboard the International Space Station (ISS) (CIMON Brings AI to the International Space Station n.d.). The feedback and testing allowed for a new and upgraded robot, CIMON-2, to be sent to the ISS (IBM 2020). The developers and user (i.e. the astronaut) of the space robot played important roles in both the development and deployment of the technology before it could be used in a high-stakes environment such as a space mission. Health care settings should be treated no less than a space mission, as they are high-stakes and expense-laden environments which affect socioeconomic and mortal outcomes for billions of people around the world. It logically follows, then, that AI-enabled decision support systems within health care should not only be tested regularly but also be informed by the very users who employ them to make critical resource allocation decisions.</p>
    </sec>
    <sec id="sec-17">
      <title>Developing AI-Focused In-House Capacities</title>
      <p>As decision-makers, people are managed by others, such as human resource departments, administrative procedures or protocols, upper management, etc. These institutional actors and protocols manage human activities, solve issues, and recommend further actions. Management also extends to medical equipment, as hospitals and clinics often have technical staff or support teams on site or procured via third-party contracts. However, such staff or administrative units are often missing when it comes to overseeing AI-enabled technology. Many health care settings deploy AI with little to no oversight of these systems, since their management requires particular skill sets. The increasing sophistication of AI-enabled systems and their authority to pass judgement (assigning risks to patients, calculating scores) makes them not only tools but also, to some extent, decision-making actors. As actors and tools, they too need supervision. Therefore, health care administrators will need to develop in-house expertise and create departments that are specifically dedicated to the monitoring, modification, and management of AI-enabled systems or embodied intelligent assistants, such as robots, which have increasingly become a part of health care settings.</p>
      <p>Such an endeavor would have several benefits, as it would: a. allow the integration of providers’ perspectives within the AI system, b. identify any issues quickly, and c. make modifications to the system, potentially within the clinical setting, or outsource them to third-party or original developers within a short period of time. A minimal sketch of one such in-house check follows.</p>
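      <p>To illustrate benefit b., the following is a hypothetical sketch of one audit an in-house AI-oversight unit might run: comparing allocation rates across patient groups over a recent window and flagging divergence for human review. The group labels and the 10% tolerance are invented placeholders, not a validated fairness criterion.</p>
      <preformat>
# Hypothetical in-house audit: flag divergent allocation rates (Python).

from collections import defaultdict

def allocation_rates(decisions):
    """decisions: iterable of (group, allocated: bool) pairs."""
    totals, granted = defaultdict(int), defaultdict(int)
    for group, allocated in decisions:
        totals[group] += 1
        granted[group] += int(allocated)
    return {g: granted[g] / totals[g] for g in totals}

def needs_review(rates, tolerance=0.10):
    """True if any two groups' rates differ by more than the tolerance."""
    values = list(rates.values())
    return max(values) - min(values) > tolerance

# One week of (group, bed allocated?) decisions, invented for illustration.
week = [("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False)]
rates = allocation_rates(week)
print(rates, "flag for human review:", needs_review(rates))
      </preformat>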
    </sec>
    <sec id="sec-18">
      <title>Developing AI-Specific Regulations, Protocols, and Ethical Guidelines and Educating Providers</title>
      <p>It was argued above that one of the long-term consequences of using AI to make moral decisions could manifest as humans putting in more cognitive effort and questioning their intuition, especially if personal judgement were at odds with the advice given by an AI program. It was also suggested that an over-reliance on intelligent systems could lead to moral outsourcing and deskilling. To preempt such scenarios, health care providers will need to create specific protocols and regulatory and ethical guidelines to regulate the use of AI within their premises. These guidelines and directives will need to specify whose judgment, human or AI, will be considered the final say when making a diagnostic or allocation decision. These guidelines will also have to delineate parameters for assigning culpability and responsibility if choices made by a human in conjunction with or against the advice of AI result in adverse outcomes. These regulations will help providers understand their roles in moral decision-making and allow them to continue sharpening their moral decision-making skills in the presence of AI.</p>
      <p>Additionally, medical schools and educational programs will need to train providers and students on how to interact with AI, evaluate its judgement, and understand its effects on human decision-making. Together, these practices will allow providers to better understand AI, its management, and its relationship with humans within clinical settings.</p>
    </sec>
    <sec id="sec-19">
      <title>Conclusion</title>
      <p>The core argument presented in this paper is that the discussion on AI decision support tools used in moral decision-making, such as resource allocation within clinical settings, provides competing narratives which delineate the pros and cons of AI in terms of the material and abstract outcomes the technology produces. Such perspectives distract us from focusing on the role human decision-making processes such as empathy, intuition, and structural and agentic knowledge play in resource allocation decisions. This scenario reflects a process-outcome divide in the current perspectives on moral decision-making within health care settings. If these human processes are disregarded while AI is used to make moral decisions, the result may be long-term consequences such as conflicted intuition, moral outsourcing and deskilling, and poor patient-provider relationships. To preempt some of these consequences and create better health outcomes, researchers, developers, and policymakers must seriously consider the importance of human processes along with machine-driven outcomes. One of the ways we can bridge the process-outcome divide is to create human-centered AI systems specific to health care. To this end, some recommendations are proposed: a. health care providers should work with developers of technology as co-developers in an iterative and interactive fashion, b. health care facilities should develop in-house AI expertise and create a specific department to manage, regulate, and modify the technology, and c. regulatory protocols and guidelines specific to the use of AI in making moral decisions should be developed. These guidelines should specify how and when humans should override AI decisions. They should also delineate rules on culpability should a decision made in conjunction with or against AI advice produce adverse effects. Furthermore, providers and students should be trained to understand the effects of AI on their decision-making. Together, these endeavors could help with taking and implementing a broader and more human-centered perspective on the use of AI in health care to advance social good.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1"><mixed-citation>Adly, A. S., Adly, A. S., and Adly, M. S. 2020. Approaches Based on Artificial Intelligence and the Internet of Intelligent Things to Prevent the Spread of COVID-19: Scoping Review. Journal of Medical Internet Research 22(8): e19104. doi.org/10.2196/19104</mixed-citation></ref>
      <ref id="ref2"><mixed-citation>Batson, C. D. 2016. Empathy and Altruism. In The Oxford Handbook of Hypo-egoic Phenomena, edited by K. W. Brown and M. R. Leary, 161-174. New York: Oxford University Press.</mixed-citation></ref>
      <ref id="ref3"><mixed-citation>Bødker, S., and Grønbaek, K. 1991. Cooperative Prototyping: Users and Designers in Mutual Activity. International Journal of Man-Machine Studies 34(3): 453-478. doi.org/10.1016/0020-7373(91)90030-B</mixed-citation></ref>
      <ref id="ref4"><mixed-citation>CIMON Brings AI to the International Space Station. n.d. Accessed October 15, 2020. https://www.ibm.com/thoughtleadership/innovation_explanations/article/cimon-ai-inspace.html</mixed-citation></ref>
      <ref id="ref5"><mixed-citation>CIMON-2 Masters Its Debut on the International Space Station. IBM. April 15, 2020. https://newsroom.ibm.com/2020-04-15-CIMON-2-Masters-Its-Debut-on-theInternational-Space-Station</mixed-citation></ref>
      <ref id="ref6"><mixed-citation>Danaher, J. 2018. Toward an Ethics of AI Assistants: An Initial Framework. Philosophy &amp; Technology 31(4): 629-653. doi.org/10.1007/s13347-018-0317-3</mixed-citation></ref>
      <ref id="ref7"><mixed-citation>Debnath, S., Barnaby, D. P., Coppa, K., Makhnevich, A., Kim, E. L., Hirsch, J. S., Zanos, T. P., and The Northwell COVID-19 Research Consortium. 2020. Machine Learning to Assist Clinical Decision-Making During the COVID-19 Pandemic. Bioelectronic Medicine 6(1): 1-8. doi.org/10.1186/s42234-020-00050-8</mixed-citation></ref>
      <ref id="ref8"><mixed-citation>Dietvorst, B. J., Simmons, J. P., and Massey, C. 2015. Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. Journal of Experimental Psychology: General 144(1): 114-126. doi.org/10.1037/xge0000033</mixed-citation></ref>
      <ref id="ref9"><mixed-citation>Gago, P., Santos, M. F., Silva, A., Cortez, P., Neves, J., and Gomes, L. 2005. INTCare: A Knowledge Discovery Based Intelligent Decision Support System for Intensive Care Medicine. Journal of Decision Systems 14(3): 241-259. doi.org/10.3166/jds.14.241-259</mixed-citation></ref>
      <ref id="ref10"><mixed-citation>Ghassemi, M. M., Al-Hanai, T., Raffa, J. D., Mark, R. G., Nemati, S., and Chokshi, F. H. 2018. How is the Doctor Feeling? ICU Provider Sentiment Is Associated with Diagnostic Imaging Utilization. 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, 4058-4064. IEEE. doi.org/10.1109/EMBC.2018.8513325</mixed-citation></ref>
      <ref id="ref11"><mixed-citation>Grote, T., and Berens, P. 2020. On the Ethics of Algorithmic Decision-Making in Healthcare. Journal of Medical Ethics 46(3): 205-211. doi.org/10.1136/medethics-2019-105586</mixed-citation></ref>
      <ref id="ref12"><mixed-citation>Lamanna, C., and Byrne, L. 2018. Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. AMA Journal of Ethics 20(9): E902-E910. doi.org/10.1001/amajethics.2018.902</mixed-citation></ref>
      <ref id="ref13"><mixed-citation>Larson, E. B., and Yao, X. 2005. Clinical Empathy as Emotional Labor in the Patient-Physician Relationship. The Journal of the American Medical Association 293(9): 1100-1106. doi.org/10.1001/jama.293.9.1100</mixed-citation></ref>
      <ref id="ref14"><mixed-citation>Le Reste, J.-Y., Coppens, M., Barais, M., Nabbe, P., Le Floch, B., Chiron, B., Dinant, G. J., Berkhout, C., Stolper, E., and Barraine, P. 2013. The Transculturality of “Gut Feelings”: Results from a French Delphi Consensus Survey. The European Journal of General Practice 19(4): 237-243. doi.org/10.3109/13814788.2013.779662</mixed-citation></ref>
      <ref id="ref15"><mixed-citation>Lemieux-Charles, L., Meslin, E. M., Aird, C., Baker, R., and Leatt, P. 1993. Ethical Issues Faced by Clinician/Managers in Resource-Allocation Decisions. Hospital &amp; Health Services Administration 38(2): 267-285.</mixed-citation></ref>
      <ref id="ref16"><mixed-citation>Logg, J. M., Minson, J. A., and Moore, D. A. 2019. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes 151: 90-103. doi.org/10.1016/j.obhdp.2018.12.005</mixed-citation></ref>
      <ref id="ref17"><mixed-citation>McCarthy, J., and Deady, R. 2008. Moral Distress Reconsidered. Nursing Ethics 15(2): 254-262. doi.org/10.1177/0969733007086023</mixed-citation></ref>
      <ref id="ref18"><mixed-citation>Medlej, K. 2018. Calculated Decisions: Sequential Organ Failure Assessment (SOFA) Score. Emergency Medicine Practice 20(10): CD1-CD2.</mixed-citation></ref>
      <ref id="ref19"><mixed-citation>Nundy, S., Montgomery, T., and Wachter, R. M. 2019. Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. The Journal of the American Medical Association. doi.org/10.1001/jama.2018.20563</mixed-citation></ref>
      <ref id="ref20"><mixed-citation>Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. 2019. Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science 366(6464): 447-453. doi.org/10.1126/science.aax2342</mixed-citation></ref>
      <ref id="ref21"><mixed-citation>Rew, L. 2000. Acknowledging Intuition in Clinical Decision Making. Journal of Holistic Nursing 18(2): 94-113. doi.org/10.1177/089801010001800202</mixed-citation></ref>
      <ref id="ref22"><mixed-citation>Robert, R., Kentish-Barnes, N., Boyer, A., Laurent, A., Azoulay, E., and Reignier, J. 2020. Ethical Dilemmas Due to the COVID-19 Pandemic. Annals of Intensive Care 10(1): 84. doi.org/10.1186/s13613-020-00702-7</mixed-citation></ref>
      <ref id="ref23"><mixed-citation>Röösli, E., and Rice, B. 2020. Bias at Warp Speed: How AI May Contribute to the Disparities Gap in the Time of COVID-19. Journal of the American Medical Informatics Association: 1-3. doi.org/10.1093/jamia/ocaa210</mixed-citation></ref>
      <ref id="ref24"><mixed-citation>Selph, R. B., Shiang, J., Engelberg, R., Curtis, J. R., and White, D. B. 2008. Empathy and Life Support Decisions in Intensive Care Units. Journal of General Internal Medicine 23(9): 1311-1317. doi.org/10.1007/s11606-008-0643-8</mixed-citation></ref>
    </ref-list>
  </back>
</article>