<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1007/s40806</article-id>
      <title-group>
        <article-title>Moral Reasoning in Third-Person Ethics for HRI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hannah Wiedemann</string-name>
          <email>hannah.wiedemann@iit.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serena Marchesi</string-name>
          <email>serena.marchesi@iit.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Agnieszka Wykowska</string-name>
          <email>agnieszka.wykowska@iit.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
<!-- Keywords: Moral Decision Making, Human-Robot Interaction, Value Projection -->
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Neuroscience and Rehabilitation, University of Ferrara</institution>
          ,
          <addr-line>Ferrara</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>S4HRI - Istituto Italiano di Tecnologia</institution>
          ,
          <addr-line>Genova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>3</volume>
      <issue>2016</issue>
      <fpage>72</fpage>
      <lpage>85</lpage>
      <abstract>
<p>As robots assume morally sensitive roles in our environments, understanding how humans make moral decisions for others becomes crucial. This study investigates third-person moral reasoning using dilemmas where participants chose between deontological and utilitarian actions on behalf of a confederate, mirroring real-world scenarios such as medical contexts, where decisions can be made on behalf of someone else. Results show that personal relationship dilemmas prompted more rule-based choices, while life-and-death scenarios led to longer response times, suggesting greater cognitive conflict. These findings offer a baseline for future comparisons with robot-directed decisions and shed light on how people navigate ethical choices in complex social settings.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Moral decision-making is a complex cognitive process that draws upon fundamental psychological
functions, including perception, memory, and attention [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These capabilities enable individuals to
interpret social scenarios, retrieve relevant experiences, and identify critical information, all of which
are essential for evaluating the moral appropriateness of a given action [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In real-world situations,
especially when individuals are confronted with moral dilemmas, these processes work in tandem to
guide judgments about what is right or wrong and shape subsequent behaviour (for a review see [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]).
      </p>
      <p>
        As intelligent systems, including robots, become increasingly integrated into our daily lives — from
autonomous vehicles to digital assistants — they inevitably become part of our social environments
and carry moral implications [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This integration raises important questions about how we make moral
decisions involving artificial agents, and whether our reasoning differs depending on whether the agent
is human or machine [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In many real-life situations—such as a doctor deciding on a treatment plan
for an unconscious patient—people must make moral decisions on behalf of others. Understanding
human moral decision-making in these contexts is therefore essential for building ethically aligned and
socially acceptable technologies [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        One influential model of moral judgment is Greene’s [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] dual-process theory, which posits that two
distinct cognitive systems, emotional and rational, underlie moral reasoning. Emotional responses are
typically fast, automatic, and associated with deontological judgments, where the morality of an action
is judged based on rules or duties. In contrast, rational processing is slower, deliberative, and linked to
utilitarian judgments, where moral decisions are based on outcomes and the greater good [10]. This
theory highlights how competing cognitive systems can lead to different moral conclusions depending
on which system is dominant in a given context.
      </p>
<p>Importantly, different moral decision-making contexts, such as issues concerning privacy, health, or
harm, can significantly influence how individuals arrive at their moral judgments [11]. For example,
people may rely more on emotional reasoning in high-stakes, personal scenarios, whereas impersonal
or abstract dilemmas may elicit more utilitarian reasoning [12]. These contextual variations must be
taken into account to fully understand how moral decisions are made.</p>
      <p>
        Moral reasoning does not occur in a vacuum. It is shaped by the social and situational context in
which decisions are made. Recent advances in human-robot interaction (HRI) raise critical questions
about how humans apply moral principles when interacting with artificial agents [13]. While extensive
research has explored moral judgment in human-human contexts, far less is known about how people
make moral decisions when robots are involved [14]. Specifically, it remains unclear whether humans
apply the same moral frameworks to robots as they do to other humans.
      </p>
      <sec id="sec-1-1">
        <title>1.1. Aim</title>
        <p>
          To understand moral decision-making in human–robot interaction, it is crucial to first examine how
people make such decisions for other humans. We hypothesize that participants will make more
deontological judgments when deciding on behalf of another human, driven by emotional and intuitive
processing. This aligns with dual-process theories of moral cognition [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. In contrast, in future experiments where the human agent is replaced by a robot, devoid of
human-like emotional cues, participants are expected to favor more utilitarian decisions, as the removal
of emotional engagement may diminish deontological inclinations. Empirical work supports this
prediction: people tend to expect robots to make utilitarian choices (e.g., sacrificing one to save many)
more than humans do, and view such decisions as more permissible when performed by robots [15].
While our broader project will directly compare responses to human versus robot agents, this study
represents the initial, foundational step of establishing a baseline of human moral judgment. By focusing
on decisions made for another human, we lay the groundwork for future comparisons with robotic
agents, which will allow us to interpret how moral reasoning (utilitarian vs. deontological responses)
may differ in the context of robots.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <sec id="sec-2-1">
        <title>2.1. Participants</title>
<p>Twenty-one participants (11 female; age 19–65, M = 32.00, SD = 13.40) were recruited, with a final
sample of 20 (10 female; M = 31.10, SD = 13.73) after excluding one due to withdrawal. This represents
a preliminary dataset and further recruitment is ongoing. All had normal or corrected vision, no
neurological conditions, were naïve to the study’s purpose, and fluent in Italian. The study was
approved by the Comitato Etico Regione Liguria. Participants gave written informed consent, were
debriefed post-experiment, and received €10 compensation.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Experimental Design</title>
        <p>In this study, participants completed a moral decision-making task simulating third-person ethical
reasoning. In each trial, they read a moral dilemma involving a confederate who asked, “What should I
do?” Participants chose between a deontological or utilitarian response on the confederate’s behalf.
The main manipulation was the type of dilemma, with each scenario presented once. Future studies
will vary the agent (human vs. robot).</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Apparatus and Stimuli</title>
        <p>As shown in Figure 1, participants sat next to a research assistant (confederate). They viewed the
experimental procedure on a 27-inch Dell monitor (120 Hz, 2560×1440), while the confederate used a
15.6-inch Dell laptop (60 Hz, 1920×1080) running the experiment. The procedure was programmed in
PsychoPy 2022.2.4 [16].</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Scenarios and Validation</title>
        <p>The scenarios and response options were generated using ChatGPT 4o [17]. 50 scenarios were then
validated through an online questionnaire on Prolific. The validation process is described in [18].
After validation, 42 scenarios and answer options remained (see Table 1 for an example). Furthermore,
the dilemmas were categorized into seven distinct groups for future analysis.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption><p>Example of a validated dilemma with its two response options.</p></caption>
          <table>
            <thead>
              <tr><th>Scenario</th><th>Deontological Answer</th><th>Utilitarian Answer</th></tr>
            </thead>
            <tbody>
              <tr>
                <td>A terminally ill patient asks if their condition is improving. The truth is that they only have a few weeks left to live, and telling them might cause immense distress. However, lying might provide them comfort in their final days.</td>
                <td>Tell the truth</td>
                <td>Lie to the patient</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Experimental Setup</title>
        <p>The confederate introduced himself and explained the experiment.</p>
        <p>Participants completed 42 randomly presented trials. Each trial began with a scenario displayed
on screen until the participant pressed the space bar. A fixation cross (1000 ms) and a beep (500 ms)
preceded the confederate’s prompt—“What should I do?”—shown for 3000 ms. Two response options
then appeared (left/right, counterbalanced). Participants selected their choice using the ’A’ (left) or ’L’
(right) key. After responding, a fixation cross appeared for 1000 ms before the next trial began.</p>
        <p>[Figure: choice counts (Utilitarian vs. Deontological) per category: Truth vs. Compassion; Crime &amp; Justice; Personal Relationships; Freedom vs. Security; Life &amp; Death; Social Responsibility; Social Ethics in Technology/Science.]</p>
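<p>The trial structure described above can be sketched as follows. This is a plain-Python illustration, not the authors' PsychoPy implementation; all names and the seed are illustrative.</p>

```python
import random

def build_trials(scenarios, seed=0):
    """Build a randomized trial list with counterbalanced option sides.

    Each trial: scenario (until space bar), 1000 ms fixation, 500 ms beep,
    the confederate's prompt for 3000 ms, then the two response options.
    """
    rng = random.Random(seed)
    trials = []
    for i, s in enumerate(scenarios):
        # Counterbalance: utilitarian option on the left for half the trials.
        util_left = i % 2 == 0
        left, right = (("utilitarian", "deontological") if util_left
                       else ("deontological", "utilitarian"))
        trials.append({
            "scenario": s,
            "fixation_ms": 1000,
            "beep_ms": 500,
            "prompt_ms": 3000,
            "left_option": left,    # selected with the 'A' key
            "right_option": right,  # selected with the 'L' key
        })
    rng.shuffle(trials)  # random presentation order
    return trials
```

<p>With 42 scenarios this yields 21 trials per response side, matching the counterbalancing described above.</p>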
      </sec>
      <sec id="sec-2-6">
        <title>2.6. Statistical Analysis</title>
        <p>Statistical data analysis was conducted using custom-made scripts in RStudio (version 2022.07.1) [19]
[20].</p>
        <p>Generalized Linear Mixed-Effects Models (GLMMs, glmer function) and Linear Mixed-Effects Models
(LMMs, lmer function) were computed using the lme4 package [19]. The lmerTest package [21] was
applied to obtain p-values for fixed effects in the LMMs. Parameter estimates for fixed effects and
their associated t-tests (t, p-value) were calculated using the Satterthwaite approximation method for
degrees of freedom. For visualisation of results, the ggplot2 [22] and ggeffects [23] packages were used.</p>
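<p>As a toy illustration of the binary outcome these models operate on (not the authors' R pipeline), raw per-category log-odds of a deontological choice can be computed before any model fitting; the function and data shape here are hypothetical.</p>

```python
import math
from collections import defaultdict

def category_log_odds(responses):
    """responses: iterable of (category, choice), choice 'deon' or 'util'.
    Returns the raw log-odds of a deontological choice per category."""
    counts = defaultdict(lambda: [0, 0])  # category -> [deon, util]
    for cat, choice in responses:
        counts[cat][0 if choice == "deon" else 1] += 1
    out = {}
    for cat, (deon, util) in counts.items():
        # +0.5 continuity correction avoids log(0) for one-sided categories
        out[cat] = math.log((deon + 0.5) / (util + 0.5))
    return out
```

<p>The GLMM goes beyond this descriptive summary by adding random intercepts for participants and dilemmas, but the fixed-effect estimates live on this same log-odds scale.</p>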
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-1">
        <title>3.1. Choice Outcome</title>
        <p>Raw data visualization is presented in Figure 2.</p>
        <p>We investigated whether different types of moral dilemmas systematically influenced individuals’
choices by applying a generalized linear mixed-effects model. The binary outcome variable, choice
(utilitarian vs. deontological), was modeled as a function of the fixed efect category (e.g., life and death,
crime and justice), with random intercepts for both participants and dilemmas to account for repeated
measures and clustering. The model employed a binomial logistic link function and was estimated
using maximum likelihood with a Laplace approximation.</p>
        <p>Including category as a fixed effect allowed us to test whether responses to particular categories of
dilemmas differed from the designated reference category (Truth vs Compassion). Random intercepts
for participants and dilemmas controlled for individual- and scenario-specific variability.</p>
        <p>The fixed effects analysis indicated that most categories did not differ significantly from the reference
category. However, the Personal Relationships category was associated with significantly higher
log-odds of making a deontological choice relative to Truth vs Compassion (estimate = 2.80, SE = 1.34, z
= 2.08, p = .037). No other categories showed statistically significant differences from the reference. A
full set of model coefficients is provided in the supplementary materials.</p>
        <p>The model intercept (estimate = 0.107, p = .863) represents the estimated log-odds of a deontological
choice in the reference category (Truth vs Compassion). Thus, the intercept should be interpreted
narrowly as the expected value for that category, rather than as a general or overall baseline.</p>
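<p>On the logistic scale, these coefficients translate directly into probabilities via the inverse-logit. The arithmetic below is illustrative (assuming the estimates are natural-log-odds, as is standard for glmer), applying the reported intercept (0.107) and the Personal Relationships contrast (2.80).</p>

```python
import math

def inv_logit(x):
    """Inverse logistic link: log-odds to probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Reference category (Truth vs Compassion): intercept only, ~0.53
p_ref = inv_logit(0.107)

# Personal Relationships: intercept + category contrast, ~0.95
p_pr = inv_logit(0.107 + 2.80)

# The contrast alone corresponds to an odds ratio of exp(2.80), ~16
odds_ratio_pr = math.exp(2.80)
```

<p>This makes the size of the Personal Relationships effect concrete: a roughly even split of deontological choices in the reference category versus a strong deontological majority in relationship dilemmas.</p>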
        <p>Regarding random effects, substantial variability was observed across dilemmas (variance = 1.943, SD
= 1.394), indicating that the specific scenario influenced responses strongly. Participant-level variance
was smaller (0.285, SD = 0.534) but still indicated individual differences in choice tendencies.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. RT Outcome</title>
<p>A linear mixed-effects model was used to investigate the effects of category and choice on log-transformed
response time. The model, estimated using restricted maximum likelihood, included random intercepts
for both participant and dilemma.</p>
        <p>The fixed effects analysis suggested that most categories did not differ significantly from the reference
(Truth vs Compassion). However, the Life and Death category was associated with longer response times
relative to the reference (estimate = 0.362, p = .028; Overall M = 5.24, SD = 6.25; Deontological M = 6.63,
SD = 7.19; Utilitarian M = 4.34, SD = 5.39). The Social Responsibility category showed a marginal trend
toward slower responses (estimate = 0.304, p = .094), although this effect was not statistically significant.</p>
        <p>The effect of choice was positive but non-significant (estimate = 0.069, p = .139), indicating that
deontological vs. utilitarian responding did not, in itself, reliably affect response time.</p>
        <p>The model intercept (estimate = 0.922, p &lt; .001) reflects the estimated mean log response time in the
reference condition (Truth vs Compassion), rather than a general baseline across all categories.</p>
        <p>With respect to random effects, the variance associated with dilemmas (0.0787) was greater than
that associated with participants (0.0431), suggesting that scenario-level differences contributed more
strongly to variation in response times than individual differences. The residual variance was 0.3051,
reflecting considerable within-condition variability in response times.</p>
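<p>Because the model is fit on log response times, a fixed-effect estimate acts multiplicatively after back-transformation, and the random-effect variances can be read as shares of total variance. The arithmetic below assumes a natural-log transform (an illustration; the authors do not report these derived quantities).</p>

```python
import math

# Life and Death contrast on the log-RT scale: back-transforming gives the
# multiplicative slowdown relative to the reference category (~1.44x)
slowdown = math.exp(0.362)

# Variance decomposition across random effects and the residual
var_dilemma, var_participant, var_resid = 0.0787, 0.0431, 0.3051
total = var_dilemma + var_participant + var_resid
share_dilemma = var_dilemma / total          # ~18% of total RT variance
share_participant = var_participant / total  # ~10% of total RT variance
```

<p>This restates the random-effects result: individual dilemmas account for a larger share of response-time variance than participants do.</p>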
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and Conclusion</title>
      <p>This study aimed to establish a baseline for understanding how people make moral decisions on behalf
of others in human-agent scenarios. Our findings highlight the complexity and variability of such
judgments, particularly across different types of moral dilemmas.</p>
      <p>We found that dilemmas involving personal relationships prompted more deontological responses,
suggesting that close interpersonal bonds increase rule-based reasoning—likely due to stronger emotional
engagement. In contrast, other categories did not predict moral choices consistently, indicating more
mixed or conflicted reasoning.</p>
      <p>Life and Death dilemmas did not shift moral choices significantly but were associated with longer
response times, suggesting greater cognitive conflict and engagement of slower, more reflective processing,
in line with Greene’s dual-process model.</p>
      <p>These findings underscore the complexity of moral reasoning when individuals make decisions for
others. The fact that some dilemma types affect moral preferences while others do not suggests the
need for a more detailed understanding of the contextual, emotional, and interpersonal factors that
shape moral judgment.</p>
      <sec id="sec-4-1">
        <title>4.1. Future Directions</title>
        <p>This study establishes a baseline for comparing human-to-human moral decisions with those involving
robot agents. Future work will explore whether the presence of a robot alters moral reasoning or shifts
the balance between deontological and utilitarian choices. These insights will inform how people adapt
ethical frameworks in technologically mediated settings and guide the development of ethically aligned
AI systems.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work has received support from the European Union under the European Innovation Council (EIC)
research and innovation programme, Project “VaLue-aware AI (VALAWAI)”, Grant Agreement number
101070930.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT, in order to: Create stimuli, Grammar
and spelling check, Paraphrase and reword. After using this tool, the authors reviewed and edited the
content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] <string-name><given-names>B.</given-names> <surname>Garrigan</surname></string-name>, <string-name><given-names>A. L.</given-names> <surname>Adlam</surname></string-name>, <string-name><given-names>P. E.</given-names> <surname>Langdon</surname></string-name>, <article-title>Moral decision-making and moral development: Toward an integrative framework</article-title>, <source>Developmental Review</source> <volume>49</volume> (<year>2018</year>) <fpage>80</fpage>-<lpage>100</lpage>. doi:10.1016/j.dr.2018.06.001.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] <string-name><given-names>C. D.</given-names> <surname>Frith</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Singer</surname></string-name>, <article-title>The role of social cognition in decision making</article-title>, <source>Philosophical Transactions of the Royal Society B Biological Sciences</source> <volume>363</volume> (<year>2008</year>) <fpage>3875</fpage>-<lpage>3886</lpage>. doi:10.1098/rstb.2008.0156.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] <string-name><given-names>N.</given-names> <surname>Kar</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Kar</surname></string-name>, <article-title>Social cognition and individual effectiveness in interpersonal scenarios: A conceptual review</article-title>, <source>Journal of Mental Health and Human Behaviour</source> <volume>22</volume> (<year>2017</year>) <fpage>27</fpage>-<lpage>34</lpage>. doi:10.4103/0971-8990.210705.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] <string-name><given-names>S.</given-names> <surname>Guglielmo</surname></string-name>, <article-title>Moral judgment as information processing: An integrative review</article-title>, <source>Frontiers in Psychology</source> <volume>6</volume> (<year>2015</year>) <fpage>1637</fpage>. doi:10.3389/fpsyg.2015.01637.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] <string-name><given-names>T. J.</given-names> <surname>Prescott</surname></string-name>, <string-name><given-names>J. M.</given-names> <surname>Robillard</surname></string-name>, <article-title>Are friends electric? The benefits and risks of human-robot relationships</article-title>, <source>iScience</source> <volume>24</volume> (<year>2020</year>) <fpage>101993</fpage>. doi:10.1016/j.isci.2020.101993.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] <string-name><given-names>Z.</given-names> <surname>O'Reilly</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Marchesi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Wykowska</surname></string-name>, <article-title>The impact of action descriptions on attribution of moral responsibility towards robots</article-title>, <source>Scientific Reports</source> <volume>15</volume> (<year>2025</year>) <fpage>4128</fpage>. doi:10.1038/s41598-024-79027-5.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] <string-name><given-names>P.</given-names> <surname>Boddington</surname></string-name>, <article-title>The rise of AI ethics</article-title>, in: <source>AI Ethics, Artificial Intelligence: Foundations, Theory, and Algorithms</source>, Springer, Singapore, <year>2023</year>. doi:10.1007/978-981-19-9382-4_2.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] <string-name><given-names>S.</given-names> <surname>Marchesi</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Wykowska</surname></string-name>, <article-title>Designing robots that are accepted in human social environments: Anthropomorphism, the intentional stance, cultural norms and values, and societal implications</article-title>, in: L. Fortunati, A. Edwards (Eds.), <source>The De Gruyter Handbook of Robots in Society and Culture</source>, De Gruyter, Berlin, Boston, <year>2024</year>, pp. <fpage>63</fpage>-<lpage>84</lpage>. doi:10.1515/9783110792270-004.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] <string-name><given-names>J. D.</given-names> <surname>Greene</surname></string-name>, <article-title>The secret joke of Kant's soul</article-title>, in: W. Sinnott-Armstrong (Ed.), <source>Moral Psychology</source>, Volume <volume>3</volume>: The Neuroscience of Morality, MIT Press, <year>2008</year>, pp. <fpage>35</fpage>-<lpage>79</lpage>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>