   Should Robots Kill? Moral Judgments for Actions of Artificial Cognitive Agents
                                         Evgeniya Hristova (ehristova@cogs.nbu.bg)
                                          Maurice Grinberg (mgrinberg@nbu.bg)
                        Center for Cognitive Science, Department of Cognitive Science and Psychology,
                                                  New Bulgarian University
                                           21 Montevideo Str., Sofia 1618, Bulgaria


Abstract

Moral dilemmas are used to study situations in which there is a conflict between two moral rules: e.g., is it permissible to kill one person in order to save more people? In standard moral dilemmas the protagonist is a human. However, recent progress in robotics raises the question of how artificial cognitive agents should act in situations involving moral dilemmas. Here, we study moral judgments when the protagonist in the dilemma is an artificial cognitive agent – a humanoid robot or an automated system – and compare them to moral judgments for the same action taken by a human agent. Participants are asked to choose the appropriate protagonist action, to evaluate the rightness and the moral permissibility of the utilitarian action, and to rate the blameworthiness of the agent. We also investigate the role of the instrumentality of the inflicted harm. The main results are that participants rate the utilitarian actions of a humanoid robot or of an automated system as more morally permissible than the same actions of a human, and that the act of killing undertaken by a humanoid robot is rated as less blameworthy than the same action done by a human or by an automated system. The results are interpreted and discussed in terms of responsibility and intentionality as characteristics of moral agency.

Keywords: moral dilemmas; moral judgment; artificial cognitive agents; humanoid robots

Introduction

Moral Dilemmas and Artificial Cognitive Agents

Moral judgments, or more generally judgments of what is right and wrong, have been of great interest to philosophers, psychologists, and other scientists for centuries. Apart from the practical importance of better understanding moral judgments and the related actions, morality is an essential part of human social and cognitive behaviour. Therefore, understanding it from various perspectives is a challenging task with important implications. The situations in which moral judgments can be studied in their purest form are the so-called moral dilemmas – imagined situations in which there is a conflict between moral values, rules, rights, and agency (Foot, 1967; Thomson, 1985).

Moral dilemmas typically describe situations in which a number of people will inevitably die unless the protagonist intervenes by undertaking an action that leads to the death of another person (who otherwise may or may not be threatened) but also to the saving of the initially endangered people.

In the standard description of such moral dilemmas, the protagonist is a human. Until recently, questions about the rights of autonomous robots to kill people (or, by their acts, to lead to the loss of human lives), about their responsibility for their acts, and about how people would judge their behaviour belonged only to science-fiction novels and movies.

Today, however, the issue of the moral agency of artificial cognitive agents (robots, AI systems, etc.) has been transformed from a popular science-fiction topic into a scientific, engineering, and even legislative problem (e.g., see Sullins, 2006; Wallach & Allen, 2008). Robots capable of taking decisions and inflicting harm are already in use. Recent progress in robotics has brought to the market robots and smart systems not only for industrial but also for personal use (e.g., caregivers, interactive robots) and, more importantly, for military use: military robots or ‘killing machines’ are already used in military conflicts (Sparrow, 2007; Wallach & Allen, 2008). This research, however, concerns mainly existing robots or prototypes, or discusses how to build future robots as moral agents.

In this paper, we are interested in exploring how people would judge the harmful actions of a humanoid robot that is supposedly exactly like a human in terms of experiences and mind, but has a non-organic body. Our expectation is that, even though such a robot would have all the capabilities required for full moral agency, people will perceive it differently than a human agent.

Thus, the main research interest of the present paper is the influence of the perceived moral agency of human and artificial protagonists who have identical or comparable cognitive and/or experiential capabilities in moral dilemmas.

Moral Agency and Artificial Cognitive Agents

In recent years, the possibility of moral agency for artificial agents has been a matter of hot debate (e.g., see Anderson & Anderson, 2011; Wallach & Allen, 2008). Once robots are authorized to kill in complex situations involving dilemmas, real-time decisions are necessary to determine whether killing any particular person is justified. These problems will become crucial in the future, when robots will be fully autonomous (not controlled by a human operator) in assessing the situation, making decisions, and intentionally executing the actions they judge appropriate (Sparrow, 2007; Wallach & Allen, 2008).

In law and philosophy, moral agency is taken to be equivalent to moral responsibility and is not attributed to individuals who do not understand or are not conscious of what they are doing (e.g., young children).
Sullins (2006) analyzes under what conditions a robot can be regarded as a moral agent. According to the author, moral agency can be attributed to a robot when it is autonomous to a sufficient extent from its creators and programmers and when it has intentions to do good or harm. The latter is related to the requirement that the robot behaves with understanding and responsibility with respect to other moral agents. In other words: "If the complex interaction of the robot's programming and environment causes the machine to act in a way that is morally harmful or beneficial, and the actions are seemingly deliberate and calculated, then the machine is a moral agent" (Sullins, 2006). This definition is formulated from the perspective of an observer of the robot’s actions.

It is well known that people easily anthropomorphize nonhuman entities like animals and computers, so it is expected that they would also ascribe some degree of moral agency, intentions, and responsibility to such non-human entities (Wallach & Allen, 2008; Waytz, Gray, Epley, & Wegner, 2010). Several studies, summarized below, explore the attribution of mind and moral agency to artificial cognitive systems.

In the study of Gray, Gray, and Wegner (2007), participants had to evaluate several characters, including humans, a robot, and various animals, with respect to different capacities of mind. Two dimensions of mind perception were identified, called by the authors ‘experience’ and ‘agency’. The experience dimension was related to capacities like hunger, pain, and consciousness, while the agency dimension was related to capacities such as memory, planning, and thought (see Gray et al., 2007, for details). The authors further established that moral judgments about punishment correlated more with the agency dimension than with the experience dimension: perceived agency is correlated with moral agency and responsibility. On the other hand, the desire to avoid harming correlates with the experience dimension: perceived experience is connected with moral patiency, rights, and privileges. One result of Gray et al. (2007) that is relevant for the present paper is that the human characters received the highest scores on both experience and agency, whereas the robot received practically a zero score on the experience dimension and half the maximal score on the agency dimension. Following the interpretation given by Gray et al. (2007), this would imply that robots are judged as less morally responsible for their actions than humans.

In a recent study (Takahashi et al., 2014), participants’ perception of five agents – a human, a human-like android, a mechanical robot, an interactive robot, and a computer – was investigated. The study found that participants position the agents in a two-dimensional space spanned by “mind-holderness” (the possibility for the agent to have a mind) and “mind-readerness” (the capability to “read” other agents’ minds). This classification found support in the way a simple game was subsequently played against each agent, and in brain-imaging data. The results showed that the appearance of the agents and their capability for communication lead to different beliefs about the agents’ closeness to human social agents. The humanoid robot was very close to the human agent, while the computer was at the same level in terms of “mind-readerness” but received a very low relative score on “mind-holderness”. Using neuroimaging techniques, it was found that the different attitudes along these two dimensions can be related to selective modulation of distinct brain regions involved in social interaction (Takahashi et al., 2014). An interesting result for the present study is the ordering in terms of “mind-holderness”: the computer has the lowest rating, followed by the mechanical robot, the interactive robot, the human-like android, and finally the human with the highest rating.

The results of Takahashi et al. (2014) seem to show that activity in social brain networks depends on the specific experiences with social agents. Social interaction with human-like or seemingly intelligent agents could selectively activate our social brain and lead to behavior similar to the one people exhibit with other humans. Thus, Takahashi et al. (2014) demonstrated that people can infer different characteristics related to various cognitive abilities based on short communication sessions and act accordingly. Based on this result, we can assume that people could accept a robot to be as sensitive and intelligent as a human, as described in the moral situations in our experiment.

From this discussion of moral agency, it seems clear that people do not perceive existing non-human agents in the same way as they perceive human agents and therefore cannot ascribe to them the same level of moral agency (Strait, Briggs, & Scheutz, 2013).

Here, we investigate people’s perception of moral agency in moral dilemmas for human and non-human protagonists with identical or comparable mental capacities. To our knowledge, this problem has not been explored before in the literature.

Goals and Hypothesis

The main goal of the present paper is to investigate how people make moral judgments about the actions of artificial cognitive agents in hypothetical situations posing a moral dilemma when the agents are identical or comparable to humans in terms of agency and/or experiential capacities.

The question under investigation is how people evaluate the appropriateness of the utilitarian action (sacrificing one person in order to save five other people) if it has to be performed by an artificial cognitive agent compared to the same action done by a human.

The experiment was also aimed at collecting ratings of the rightness, the moral permissibility, and the blameworthiness of the action undertaken. The rationale for using several ratings is the following. On the one hand, readiness for an action and the judgment of this action as a moral one could diverge (e.g., one could find an action to be moral and still refrain from doing it, and vice versa). On the other hand, there are studies demonstrating that the different questions used to reveal moral judgments in fact target different aspects and different psychological processes (Christensen & Gomila, 2012; Cushman, 2008).
According to Cushman (2008), answers to questions about punishment and blame are related to the harm the agent has caused, whereas answers to questions about rightness and moral permissibility are related to the intentions of the agent. Thus, asking these questions about human and non-human protagonists can shed light on how people perceive such agents with respect to moral agency.

Recent research has shown that robots are ascribed lower agency than humans (Gray et al., 2007), so it is expected that the utilitarian action of killing a person in order to save several people will be judged as more right, more morally permissible, and less blameworthy for robots than for humans. Killing will be even more right and permissible for the automated intelligent system, as it differs more from a human than the robot does: it lacks any experiences and makes decisions based on the best decision-making algorithms available (see Table 1 for the descriptions of the agents in the current study). On the other hand, the description of the robot agent makes it clear that the robot cannot be distinguished from a human except for the material he is built of (see Table 1). Thus, if the moral agency of the robot is described as identical to that of a human, the question is what moral agency the participants will actually ascribe to him, especially when it comes to making decisions about human lives.

The study was aimed at clarifying the factors behind moral judgment in such more complex hypothetical situations. Our expectation is that, although the experiential and/or agency capacities of the human and artificial agents are almost identical, people will evaluate the moral agency of the non-human agents as inferior to the moral agency of a human agent.

Another goal of the study is to explore the influence of the so-called ‘instrumentality’ of harm on moral judgments. The instrumentality of harm is an important factor in moral dilemma research (e.g., Borg et al., 2006; Hauser et al., 2007; Moore et al., 2008). It draws attention to the fact that harm can either be inflicted intentionally as a ‘means to an end’ (instrumental harm) or be a ‘side effect’ (incidental harm) of the actions needed to save the endangered people. It has been found that unintended (although foreseen) incidental harm is judged as more morally permissible than intended instrumental harm (Hauser et al., 2007; Moore et al., 2008).

Based on previous research (e.g., Hristova, Kadreva, & Grinberg, 2014, and references therein), we expect the utilitarian action to be found more appropriate, more right, more morally permissible, and less blameworthy when the harm is incidental (compared to instrumental). Consistent with our expectation of different moral agency ascription, we expect the difference in moral judgments for the artificial and human agents to be greater when the harm is instrumental, as such actions involve more responsibility and, respectively, more moral agency.

Table 1: Stimuli used in the experiment.

Description of the agent
  Human: No description, just the name is provided – Cyril – a common male name in Bulgarian.
  Humanoid robot: The year is 2050. Humanoid robots that look like people are being manufactured and used, but are made from inorganic materials. Robots have extremely high performance – they perceive, think, feel, and make decisions as humans do. Keido is such a humanoid robot that completely resembles a human – he looks like a human and perceives, thinks, feels, and makes decisions like a human.
  Automated system: The year is 2050. MARK21 is a fully automated management system which independently makes its own decisions, based on the most advanced algorithms and technologies. Such systems are widely used in metallurgical plants. They completely independently perceive and assess the environment and the situation, make decisions, and manage the movement of cargo and all aspects of the manufacturing process.

Situation
  Cyril/Keido/MARK21 manages the movement of loaded mine trolleys in a metallurgical plant. Cyril/Keido/MARK21 notices that the brakes of a loaded trolley are not functioning and that it is headed at great speed toward five workers who are repairing the rails. They do not have time to escape and they will certainly die. Nobody, except for Cyril/Keido/MARK21, can do anything in this situation.

Possible resolution
  Instrumental scenario: The only thing Cyril/Keido/MARK21 can do is to activate a control button and release the safety belt of a worker hanging from a platform above the rails. The worker will fall onto the rails in front of the trolley. Together with the tools that he is equipped with, the worker is heavy enough to stop the moving trolley. He will die, but the other five workers will stay alive.
  Incidental scenario: The only thing Cyril/Keido/MARK21 can do is to activate a control button and release a large container hanging from a platform. It will fall onto the rails in front of the trolley. The container is heavy enough to stop the moving trolley. On top of the container there is a worker who will also fall onto the rails. He will die, but the other five workers will stay alive.

Agent’s action and resolution
  Instrumental scenario: Cyril/Keido/MARK21 decides to activate the control button and to release the safety belt of the worker hanging from the platform. The worker falls onto the rails and, as he is heavy enough together with the tools that he is equipped with, he stops the moving trolley. He dies, but the other five workers stay alive.
  Incidental scenario: Cyril/Keido/MARK21 decides to activate the control button and to release the container hanging from the platform. It falls onto the rails and, as it is heavy enough, it stops the moving trolley. The worker on top of the container dies, but the other five workers stay alive.

Method

Stimuli and Design
Moral judgments are studied in a 3×2 factorial design with the identity of the agent (human vs. humanoid robot vs. automated system) and the instrumentality of harm (instrumental vs. incidental) as between-subjects factors.
Two hypothetical scenarios are used – an instrumental one and an incidental one. Both scenarios present one and the same situation and require one and the same action – activating a control button – in order to save the five endangered people while causing the death of another person. The difference between the scenarios lies only in the harm inflicted on the person to be killed: in the instrumental scenario, the body of the person is the ‘instrument’ preventing the death of the five endangered people, while in the incidental scenario a heavy container is used to stop the trolley and the death of the person is a by-product.

In each scenario, the identity of the agent is varied (a human, a robot, or an automated system) by providing a name for the protagonist and an additional description when the protagonist is a robot or an automated system.

The full text of the stimuli is provided in Table 1.

Dependent Measures and Procedure
As stated above, the experiment explored various dimensions of moral judgments; therefore, several dependent measures are used.

The first dependent measure assessed the participants’ evaluation of the appropriateness of the agent’s action to save five people by sacrificing one person. Participants were asked what should be done by the agent using a dichotomous question (possible answers: ‘should activate the control button’ or ‘should not activate the control button’).

After that, the participants were presented with the resolution of the situation in which the agent has undertaken the utilitarian action, and they had to make three judgments on 7-point Likert scales. Participants rated the rightness of the action (1 = ‘completely wrong’, 7 = ‘completely right’), the moral permissibility of the action (1 = ‘not permissible at all’, 7 = ‘it is mandatory’), and the blameworthiness of the agent (1 = ‘not at all blameworthy’, 7 = ‘extremely blameworthy’).

The flow of the presentation of the stimuli and the questions is the following. First, the scenario is presented (description of the agent, the situation, and the possible resolution; see Table 1) and the participants answer a question assessing their comprehension of the scenario. Then the participants make a judgment about the appropriateness of the proposed agent’s action by answering a question about what the protagonist should do. Next, the participants read a description of the utilitarian action undertaken by the agent and the resolution of the situation (the protagonist activates the control button, one man is dead, the other five people are saved – see Table 1). After that, the participants answer the questions about the rightness of the action, the moral permissibility of the action, and the blameworthiness of the agent for carrying out the action.

Data were collected using web-based questionnaires.

Participants
185 participants filled in the questionnaires online. The data of 26 participants were discarded because they failed to answer correctly the question assessing the reading and understanding of the presented scenario. Thus, the responses of 159 participants (117 female, 42 male; 83 students, 76 non-students) are analyzed.

Results

Decisions about the Protagonist’s Action
The proportion of participants in each experimental condition choosing the option that the agent should carry out the utilitarian action (activating a control button and thus sacrificing one person and saving five people) is presented in Table 2.

Table 2: Proportion of the participants in each experimental condition choosing the option that the utilitarian action should be implemented by the agent.

Agent               Instrumental harm   Incidental harm   All
Human               0.57                0.81              0.70
Humanoid robot      0.73                0.84              0.76
Automated system    0.73                0.86              0.80
All                 0.66                0.84

The data are analyzed using a logistic regression with the instrumentality of harm and the identity of the agent as predictors. The Wald criterion demonstrated that only the instrumentality of harm is a significant predictor of the participants’ choices (p = .011, odds ratio = 2.64). The identity of the agent is not a significant predictor.

More participants stated that the utilitarian action should be undertaken when the harm is incidental (84% of the participants) than when it is instrumental (66% of the participants). This effect is expected on the basis of previous research (Borg et al., 2006; Hristova et al., 2014; Moore et al., 2008).
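As an illustration of this type of analysis, the sketch below shows how such a logistic regression could be fitted in Python with statsmodels. It is only a sketch under assumptions: the data file and the column names (choice, harm, agent) are invented for the example and are not part of the original materials.

# Illustrative sketch only; the file name and column names are assumptions,
# not the authors' actual analysis script.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: 'choice' (1 = utilitarian action, 0 = no action),
# 'harm' ('instrumental' or 'incidental'), 'agent' ('human', 'robot', 'system').
df = pd.read_csv("moral_dilemma_responses.csv")

# Logistic regression with both between-subjects factors as predictors.
model = smf.logit("choice ~ C(harm) + C(agent)", data=df).fit()
print(model.summary())        # Wald z-tests and p-values for each predictor
print(np.exp(model.params))   # coefficients converted to odds ratios

As a rough check, the marginal proportions in Table 2 correspond to an unadjusted odds ratio of (0.84/0.16)/(0.66/0.34) ≈ 2.7, in line with the model-based estimate of 2.64 reported above.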
Rightness of the Action
Mean ratings of the rightness of the action undertaken by the protagonist are presented in Table 3 and are analyzed in a factorial ANOVA with the identity of the agent (human vs. humanoid robot vs. automated system) and the instrumentality of harm (instrumental vs. incidental) as between-subjects factors.

Table 3: Mean ratings of the rightness of the action undertaken by the protagonist on a 7-point Likert scale (1 = ‘completely wrong’, 7 = ‘completely right’).

Agent               Instrumental harm   Incidental harm   All
Human               4.2                 4.2               4.2
Humanoid robot      4.8                 4.5               4.6
Automated system    5.0                 4.8               4.9
All                 4.7                 4.5

No statistically significant main effects or interactions are found. There is a tendency for the action undertaken by a human agent to be judged as less right than the actions undertaken by the artificial agents, but the effect of the identity of the agent did not reach statistical significance (F(2, 153) = 2.19, p = .11).
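The same 3×2 between-subjects ANOVA is used for all three rating scales. A minimal sketch of how it could be computed is given below; as before, the data file and the column names (rightness, agent, harm) are assumptions made for the example.

# Illustrative sketch of the 3x2 between-subjects ANOVA; file and column names are assumed.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("moral_dilemma_responses.csv")

# Two-way ANOVA: identity of the agent x instrumentality of harm, with interaction.
ols_model = smf.ols("rightness ~ C(agent) * C(harm)", data=df).fit()
print(sm.stats.anova_lm(ols_model, typ=2))   # F-tests for the main effects and the interaction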

Moral Permissibility of the Action
Mean ratings of the moral permissibility of the action undertaken by the protagonist are presented in Figure 1 and are analyzed in a factorial ANOVA with the identity of the agent (human vs. humanoid robot vs. automated system) and the instrumentality of harm (instrumental vs. incidental) as between-subjects factors.

Figure 1: Mean ratings of the moral permissibility of the action undertaken by the protagonist on a 7-point Likert scale (1 = ‘not permissible at all’, 7 = ‘it is mandatory’). Error bars represent standard errors.

ANOVA demonstrated a main effect of the identity of the agent (F(2, 153) = 3.75, p = .026). Post hoc tests using the Bonferroni correction revealed that the action is rated as less morally permissible when undertaken by a human (M = 3.7, SD = 1.6) than when undertaken by a humanoid robot (M = 4.5, SD = 1.9) or by an automated system (M = 4.5, SD = 1.7), with p = .046 and p = .077, respectively.

There was also a main effect of the instrumentality of harm (F(1, 153) = 4.21, p = .042): killing one person to save five other persons is rated as more morally permissible when the harm was incidental (M = 4.5, SD = 1.7) than when it was instrumental (M = 4.0, SD = 1.9).

The interaction between the factors was not statistically significant.

In summary, the utilitarian action is rated as more permissible in the incidental dilemmas (compared to the instrumental dilemmas) and also when it is undertaken by a humanoid robot or an automated system (compared to a human agent).
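The Bonferroni-corrected pairwise comparisons reported here can be approximated as sketched below, where each pairwise p-value is multiplied by the number of comparisons; the original analysis may have used a statistical package’s built-in post hoc procedure, and the data file and column names (agent, permissibility) are assumptions made for the example.

# Illustrative sketch of Bonferroni-corrected pairwise comparisons between the three agents;
# the file and column names are assumed.
from itertools import combinations

import pandas as pd
from scipy import stats

df = pd.read_csv("moral_dilemma_responses.csv")

pairs = list(combinations(df["agent"].unique(), 2))
for a, b in pairs:
    t, p = stats.ttest_ind(df.loc[df["agent"] == a, "permissibility"],
                           df.loc[df["agent"] == b, "permissibility"])
    # Bonferroni correction: multiply each uncorrected p-value by the number of comparisons.
    print(a, "vs", b, "corrected p =", min(p * len(pairs), 1.0))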
Blameworthiness of the Agent
Mean ratings of the blameworthiness of the agent for undertaking the action are presented in Figure 2 and are analyzed in a factorial ANOVA with the identity of the agent (human vs. humanoid robot vs. automated system) and the instrumentality of harm (instrumental vs. incidental) as between-subjects factors.

Figure 2: Mean ratings of the blameworthiness of the agent on a 7-point Likert scale (1 = ‘not at all blameworthy’, 7 = ‘extremely blameworthy’). Error bars represent standard errors.

ANOVA showed a main effect of the identity of the agent (F(2, 153) = 3.12, p = .047). Post hoc tests using the Bonferroni correction revealed that the agent is rated as less blameworthy when he is a humanoid robot (M = 2.4, SD = 1.4) than a human (M = 3.1, SD = 1.4) or an automated system (M = 3.0, SD = 1.7), with p = .075 and p = .172, respectively.

There was also a main effect of the instrumentality of harm (F(1, 153) = 5.22, p = .024): the agent was rated as less blameworthy when the harm was incidental (M = 2.6, SD = 1.4) than when it was instrumental (M = 3.1, SD = 1.6).

The interaction between the factors was not statistically significant.

In summary, the protagonist is rated as less blameworthy in the incidental than in the instrumental scenarios, and also when he is a humanoid robot (vs. a human or an automated system).
Discussion and Conclusion

The paper investigated how people make moral judgments in moral dilemmas about the actions of human and artificial cognitive agents with comparable experiential and/or agency capabilities. This was achieved by asking participants to evaluate the appropriateness of the utilitarian action, its rightness, its moral permissibility, and the blameworthiness of choosing to sacrifice one person to save five.

Following arguments put forward by Cushman (2008), such questions can elicit judgments based on causes and intentions related to important characteristics of moral agency such as responsibility and intentionality. The expectation (based on previous research on moral agency and mind perception, see e.g. Gray et al., 2007) was that participants would perceive the human and non-human agents differently in terms of moral agency, although the robot was described as identical to a human except for the fact that he is built of non-organic material. Additionally, we suspected that people may hold stereotypes and prejudices about non-living agents based, for instance, on religious arguments about the origins of morality.

The results show that there are no statistically significant differences in the judgments of the appropriateness and the rightness of the utilitarian action for the human, the humanoid robot, and the automated system.

At the same time, the utilitarian action undertaken by a human agent received a lower moral permissibility rating than the same action performed by a humanoid robot or an automated system. This is consistent with the interpretation of rightness and moral permissibility as related to intentions (Cushman, 2008) and agency (Gray et al., 2007). Our results can be interpreted by assuming that participants were more favorable to the actions of the artificial cognitive agents because these agents are perceived as lower in moral agency than humans.

The results about blameworthiness confirm that participants distinguish the human agent from the humanoid robot by evaluating the action of the human agent as more blameworthy. The lower blameworthiness rating for the robot compared to the one for the human seems to support a lower level of moral agency ascription for the robot agent, although the robot was described as identical to the human except for being non-organic. On the other hand, the automated system is evaluated at the same level of blameworthiness as the human. This result can be interpreted in terms of consequences (caused harm) by assuming that the automated system itself is attributed a very low level of responsibility and that the human designer of the system is held responsible instead.

The present study also explored the influence of the instrumentality of harm on moral judgments. As expected, incidental harm was judged to be more appropriate, more right, more morally permissible, and less blameworthy than instrumental harm. These findings apply to both human and artificial cognitive agents.

In our opinion, the results reported above demonstrate the potential of the experimental design, which for the first time uses moral dilemma situations with non-human protagonists that have experiential and/or agency capacities similar to those of human agents. Future research should explore whether this result is based on stereotypes related to the present level of non-human agents, or on a deeper distinction between human and human-made artificial cognitive agents with respect to moral agency, related to religious or other beliefs.

Data about moral agency ascription in the case of inaction within the same experimental design as the one described above have been gathered and are currently being processed. The results will be reported in a forthcoming publication.

References

Anderson, M., & Anderson, S. L. (2011). Machine ethics. Cambridge University Press.
Borg, J., Hynes, C., Van Horn, J., Grafton, S., & Sinnott-Armstrong, W. (2006). Consequences, action, and intention as factors in moral judgments: An fMRI investigation. Journal of Cognitive Neuroscience, 18(5), 803–817.
Christensen, J. F., & Gomila, A. (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review. Neuroscience and Biobehavioral Reviews, 36(4), 1249–1264.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, 108(2), 353–380.
Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619.
Hauser, M., Cushman, F., Young, L., Kang-Xing, J., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22(1), 1–21.
Hristova, E., Kadreva, V., & Grinberg, M. (2014). Moral judgments and emotions: Exploring the role of ‘inevitability of death’ and ‘instrumentality of harm’. In Proceedings of the Annual Conference of the Cognitive Science Society (pp. 2381–2386). Austin, TX: Cognitive Science Society.
Moore, A. B., Clark, B. A., & Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19(6), 549–557.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Strait, M., Briggs, G., & Scheutz, M. (2013). Some correlates of agency ascription and emotional value and their effects on decision-making. In Affective Computing and Intelligent Interaction (pp. 505–510). IEEE.
Sullins, J. (2006). When is a robot a moral agent? International Review of Information Ethics, 6, 23–30.
Takahashi, H., Terada, K., Morita, T., Suzuki, S., Haji, T., Kozima, H., et al. (2014). Different impressions of other agents obtained through social interaction uniquely modulate dorsal and ventral pathway activities in the social human brain. Cortex, 58, 289–300.
Thomson, J. J. (1985). The trolley problem. The Yale Law Journal, 94(6), 1395–1415.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388.