<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Vector-Based Extension of Value-Based Argumentation for Public Interest Communication</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pietro Baroni</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giulio Fellin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Massimiliano Giacomin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carlo Proietti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Research Council of Italy (CNR), Institute for Computational Linguistics “A. Zampolli”, Area di ricerca di Genova, Torre di Francia</institution>
          ,
          <addr-line>Via de Marini 6 - 16149 Genova</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Brescia, Department of Information Engineering</institution>
          ,
          <addr-line>via Branze 38 - 25123 Brescia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we propose a mathematical model to quantify and analyse the impact of public interest communication on target audiences. Building on Bench-Capon's value-based approach, our model introduces the concept of value vectors to represent a multi-dimensional spectrum of values influencing audience perception and response. By employing vectors, we aim to capture the nuanced interplay between diverse values and the effectiveness of communication strategies.</p>
      </abstract>
      <kwd-group>
        <kwd>argumentation</kwd>
        <kwd>value-based model</kwd>
        <kwd>vector space</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>2. Providing a principled method to assess which arguments are justified in a debate, based on
their status (e.g., unattacked or ultimately reinstated by other arguments) as dictated by chosen
semantics.</p>
      <p>This capability allows for an a posteriori analysis of the argumentative content and structure of public
interest campaigns, potentially explaining their success or failure.</p>
      <p>To effectively apply computational argumentation to Public Interest Communication, several challenges must be addressed:
1. Argument Retrieval: How to mine arguments from public campaigns, which requires identifying an annotation format for arguments and their mutual relations, such as attacks and supports.
2. Incomplete Information: Public interest campaign materials often provide limited information for reconstructing and evaluating the generated argumentative content. Arguments are typically from a single source (the issuing organization), with counterarguments and counter-counterarguments often implicit or from indirect sources (e.g., social forum discussions), necessitating abductive reasoning.
3. Diverse Audiences: Different individuals assess argumentative structures in various ways due to differing inferential/epistemic standards and perceptions of defeats. Additionally, real-life arguments have non-inferential components, such as promoted values, affecting their assessment.</p>
      <p>In this preliminary paper, we place particular emphasis on addressing the challenge of Diverse Audiences. Our focus is on developing a mathematical model to quantify and analyse how different inferential and epistemic standards, as well as the influence of non-inferential components like promoted values, affect the assessment of public interest communication among varied target audiences. Arguments can be represented as vectors of coordinates, where each coordinate represents the strength of the argument concerning a particular value or aspect. For each individual or audience, a mapping function assigns weights to arguments, determining which arguments defeat others based on their relative weights.</p>
      <p>
        Building on Bench-Capon’s value-based approach [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], our model introduces the concept of value vectors to represent a multi-dimensional spectrum of values influencing audience perception and response. By employing vectors, we aim to capture the nuanced interplay between diverse values and the effectiveness of communication strategies. This approach allows for a more detailed and structured analysis of how different arguments resonate with various segments of the audience, providing insights into the reasons behind the success or failure of public interest campaigns.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], each argument is attributed a single value. The hierarchy of values for a given audience determines whether or not attacks from (or to) other arguments are successful. This approach is arguably the most immediate one for representing how a value dimension may be attached to argumentative discourse. Yet arguments may in fact refer to more than a single value, as soon as they become more articulated. In view of this, [7] generalises Bench-Capon’s approach by allowing arguments that refer to multiple values. This is implemented by a function arg attributing a (possibly empty) set of arguments to each value. Therefore, an argument a has multiple values whenever, for two distinct values v1 and v2, a belongs to the intersection of arg(v1) and arg(v2); it has no value when a is not a member of arg(v) for any value v. Contrary to [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], in [7] a total ordering among values does not suffice to uniquely determine the defeat relation. To this end it is necessary to define an ordering among sets of values. In [7, Sect. 3.2] three different instances of such an ordering are introduced, deriving them from a primitive ordering among arguments.
      </p>
      <p>The framework we introduce is similar in spirit to that of [7] insofar as it attributes a bundle of n features (values) to each argument, specified as a vector in an n-dimensional space. Differently from [7], the vectorial representation allows for grading how much a given value attaches to one argument. This, in turn, enables a wider range of possibilities for defining the relative weight of arguments with respect to a given audience, and therefore the success or failure of an attack. We define such a weight by means of an impact measure (Sect. 3). It is important to stress that this approach allows in principle to encompass different psychological theories of basic human values, such as those by Schwartz [11] and Haidt [4].</p>
      <p>The paper is organised as follows. Section 2 introduces the proposed formal framework, Section 3
describes the impact measure adopted, while Section 4 discusses the notion of convincing argument.
Section 5 explores a generalisation to lists of arguments included in a campaign, while Section 6
concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The framework</title>
      <p>
        We propose a Value-based Argumentation Framework extending the approach in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>In particular, we consider a triple ⟨A, →, Apos⟩ where ⟨A, →⟩ is an argumentation framework—i.e. A
is a set of arguments and → is a binary relation → ⊆ A× A—and Apos ⊆ A. We read a → b as “a attacks
b.” The set Apos will be the set of arguments expressing the goals of the considered communication
campaign. In practical applications, one starts by identifying Apos and then constructs A by adding
possible chains of attacks to elements of Apos.</p>
      <p>By extension, we also say that a set of arguments “S ⊆ A attacks b” (in symbols S → b) if there
is a ∈ S such that a → b. The framework ⟨A, →⟩ is conveniently represented as a directed graph in
which the arguments are vertices and edges represent attacks between arguments.
Example, part 1. Let’s consider a campaign which aims to encourage a shift towards a greener diet
by promoting the environmental and health benefits of reducing meat consumption and increasing
plant-based food intake. We can identify the following set for Apos:1</p>
      <p>Apos = {a1 : Less chronic disease, better overall health and less foodborne illness,
a2 : Better environment: soil, water, air,
a3 : Less animal suffering}.
To build A, we must consider possible attacks to these arguments, and attacks to them, and so on. For instance, let us take:2</p>
      <p>A = Apos ∪ {b1 : Veganism may be unhealthy, e.g. different blood types need different diets,
b2 : Morality is relative,
b3 : Plant-based agriculture still causes harm,
b4 : Not everyone can be vegan,
b5 : There are worse things going on in the world, this is a secondary cause,
b6 : The world is a tough place, so we have to deal with bad things,
c1 : Vegan athletes exist,
c2 : Many nutritional experts state that veganism can be healthy and optimal,
c3 : The blood-type diet theory has been debunked,
c4 : Most people are not moral relativists about unnecessary suffering,
c5 : Recognizing that the world is cruel is not an excuse to do harm,
c6 : The goal is to make progress, no one expects the world to become perfect,
d1 : Experts are influenced by financial interests and agendas,
d2 : Not all experts agree,
e1 : There is consensus among independent experts about the health benefits}.
Then ⟨A, →, Apos⟩ can be represented as in Figure 1.</p>
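      <p>As an illustration, the triple ⟨A, →, Apos⟩ can be encoded directly as sets of labels and attack pairs. Since the exact edge set of Figure 1 is not reproduced in the text, the attacks below are an illustrative sketch inferred from the argument descriptions, not the paper's actual graph.</p>
      <preformat>
```python
# Sketch of the running example's framework ⟨A, →, Apos⟩.
# A pair (x, y) means "x attacks y"; this edge set is a guess
# reconstructed from the argument descriptions, for illustration only.
A_pos = {"a1", "a2", "a3"}
attacks = {
    ("b1", "a1"), ("a1", "b1"),            # health dispute, mutual attack
    ("c1", "b1"), ("c2", "b1"), ("c3", "b1"),
    ("d1", "c2"), ("d2", "c2"), ("e1", "d2"),
    ("b2", "a3"), ("c4", "b2"),
    ("b3", "a2"),
    ("b5", "a3"), ("c6", "b5"),
    ("b6", "a3"), ("c5", "b6"),
}
# A is Apos plus every argument occurring in a chain of attacks.
A = A_pos | {x for edge in attacks for x in edge}

def attacks_set(S, b):
    """A set S attacks b if there is some a in S with a → b."""
    return any((a, b) in attacks for a in S)

print(attacks_set({"c2"}, "b1"))  # True
```
      </preformat>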
      <p>The key question is whether a given argument a ∈ Apos will convince the public of our campaign. In order to propose some ways to answer this question, we endow ⟨A, →, Apos⟩ with some additional structure.
1 The list is taken from [6]. For simplicity, some arguments have been merged.
2 We mostly follow [3]. We acknowledge that this list is not exhaustive; it is provided solely for illustrative purposes.</p>
      <p>The set of audiences is a set of the form I = {1, 2, 3, ..., k}, of cardinality k. To each audience i ≤ k we associate a weight pi. Weights satisfy the following conditions:

∀i ≤ k : pi ≥ 0,    ∑i≤k pi = 1.

This takes into account the fact that each i may represent not an individual listener, but a portion of the whole population that has similar values.</p>
      <p>Let us suppose that we have a fixed (and ordered) set of values (e.g. equality, individual health,
collective health, etc.) of cardinality n.</p>
      <p>In this preliminary paper, we do not argue for a specific set of values, as it falls outside the scope
of the present work. Various solutions can be found in the literature, such as in [8, 9, 10, 11]. For
illustrative purposes we follow the list of classes of values from [8]:
Example, part 2. List of values:
1. Self-direction: thought
2. Self-direction: action
3. Stimulation
4. Hedonism
5. Achievement
6. Power: dominance
7. Power: resources
8. Face
9. Security: personal
10. Security: societal
11. Tradition
12. Conformity: rules
13. Conformity: interpersonal
14. Humility
15. Benevolence: caring
16. Benevolence: dependability
17. Universalism: concern
18. Universalism: nature
19. Universalism: tolerance
20. Universalism: objectivity
</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], each argument is associated with an individual value. We believe that this can be refined, as an argument can rely on more than one value, each with a different degree. To this purpose, we define
— the space of values as V = [0, 1]n, each dimension of which is associated with the corresponding value;
— the value function val : A → V, which assigns to each a ∈ A its vector of values.
It may be possible to refine the model by imposing certain restrictions on the vectors resulting from val. Such restrictions should be motivated by further considerations or empirical evidence.
      </p>
      <p>Example, part 3. Let us consider for instance the arguments a1, b1, c2 from part 1 of our running Example, i.e.</p>
      <p>a1 : Less chronic disease, better overall health and less foodborne illness,
b1 : Veganism may be unhealthy, e.g. different blood types need different diets,
c2 : Many nutritional experts state that veganism can be healthy and optimal.</p>
      <p>Argument a1 relies highly on value 9 – ‘Security: personal’, which includes personal health, and to a lesser extent on value 10 – ‘Security: societal’. It has little to no connection to other values. Therefore, a reasonable vector of values can be</p>
      <p>val(a1) = ⟨0, 0, 0, 0, 0, 0, 0, 0, 1, 0.6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0⟩.</p>
      <p>Notice that b1 also relies on the very same values—although with a different conclusion—so val(b1) = val(a1). Argument c2 instead relies solely on value 20 – ‘Universalism: objectivity’, hence
val(c2) = ⟨0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1⟩.</p>
      <p>Figure 2 presents the vectors of values for all the arguments considered in our running example.</p>
      <p>Each audience i ≤ k will have their own preferences among values. We want to represent this
by introducing the audience specific value function asv : I → V , which assigns to each audience i a
vector whose jth entry represents the importance that audience i gives to value j. As above, further
considerations or empirical evidence may require imposing some restrictions on the vectors resulting
from asv.</p>
      <p>Example, part 4. Suppose that I = {1, 2}, and assign to asv a value vector for each of the two audiences. We also set p1 = 0.4 and p2 = 0.6.</p>
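      <p>A minimal sketch of val, asv and the audience weights in code: the vectors for a1, b1, c2 follow part 3 of the example, while the two asv vectors are hypothetical placeholders, since the table of part 4 is not reproduced in this text.</p>
      <preformat>
```python
N_VALUES = 20  # the value classes listed in Example, part 2

def vec(**coords):
    """Build a 20-dimensional value vector from sparse 1-based entries,
    e.g. vec(v9=1.0, v10=0.6)."""
    v = [0.0] * N_VALUES
    for key, weight in coords.items():
        v[int(key.lstrip("v")) - 1] = weight
    return v

# Value vectors from Example, part 3.
val = {
    "a1": vec(v9=1.0, v10=0.6),   # Security: personal / societal
    "b1": vec(v9=1.0, v10=0.6),   # same values, opposite conclusion
    "c2": vec(v20=1.0),           # Universalism: objectivity
}

# Hypothetical audience-specific value vectors (placeholders; the
# actual asv table of Example, part 4 is not given in the text).
asv = {
    1: vec(v9=0.9, v10=0.3, v20=0.5),
    2: vec(v9=0.4, v10=0.8, v20=0.9),
}
p = {1: 0.4, 2: 0.6}  # audience weights, summing to 1

print(val["a1"][8], val["c2"][19])  # 1.0 1.0
```
      </preformat>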
      <p>We have defined all the primitive structures that we expect the framework to have. We want to
answer the question from above: given an argument a ∈ Apos, will a convince the audience?</p>
      <p>To this purpose, we introduce in the next section an impact measure which aims to capture how effective each argument is for a given audience.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The impact measure</title>
      <p>For each audience i ≤ k we introduce their impact measure as a function</p>
      <p>
        ∥ · ∥ i : A → [0, 1].
      </p>
      <p>We want it to assign to each argument a the impact that a has on i, by evaluating the values on which a relies and the importance that these values have for i. In particular, we want to ensure that if an argument a relies on a value that is important to i, then a will have a high impact on i. To this end, we require that this measure can be written as a composition

∥ · ∥ i : A → V ⊆ Rn → R,

where the first arrow is val and the second is an audience-specific function si. In words, given an argument a, its impact on audience i is determined on the basis of its vector of values val(a), which is then synthesised into a single real number, measuring the impact, through the audience-specific function si.</p>
      <p>The function si is required to satisfy the following properties:
— It is a seminorm, i.e. for any ⃗x, ⃗y ∈ Rn and λ ∈ R we have3
si(⃗x + ⃗y) ≤ si(⃗x) + si(⃗y), (Subadditivity)
si(λ · ⃗x) = |λ| · si(⃗x). (Absolute homogeneity)</p>
      <p>— It satisfies monotonicity: consider ⃗x = ⟨x1, ..., xn⟩, ⃗y = ⟨y1, ..., yn⟩. If |x1| ≤ |y1|, ..., |xn| ≤ |yn|, then si(⃗x) ≤ si(⃗y).</p>
      <p>
        — The restriction si |V has codomain contained in [0, 1].
      </p>
      <p>In particular, monotonicity ensures that if an argument a relies on values which are of higher importance
to i, then a will have a higher impact on i.</p>
      <p>Seminorms satisfy some very nice properties. For instance:
Proposition 3.1. Let si : Rn → R be a seminorm. Then:
3 | · | denotes the absolute value.</p>
      <p>1. si(⃗0) = 0.</p>
      <p>2. Nonnegativity: For every ⃗x ∈ Rn, we have si(⃗x) ≥ 0.</p>
      <p>Proof.</p>
      <p>1. Absolute homogeneity implies si(⃗0) = si(0 · ⃗x) = 0 · si(⃗x) = 0.
2. Let ⃗x ∈ Rn. Absolute homogeneity implies si(− ⃗x) = si((− 1) · ⃗x) = | − 1| · si(⃗x) = si(⃗x).</p>
      <p>Subadditivity now implies 0 = si(⃗0) = si(⃗x + (− ⃗x)) ≤ si(⃗x) + si(− ⃗x) = si(⃗x) + si(⃗x) = 2 si(⃗x), which, in turn, implies 0 ≤ si(⃗x).</p>
      <p>In this paper we consider the following definition for the impact measure:

∥ · ∥ i : A → [0, 1],    a ↦ √( (1/n) ∑j≤n (asv(i)j · val(a)j)² ).

If we use ∥ · ∥ to denote the Euclidean norm and ⊙ to denote the Hadamard product (i.e. the componentwise product), then we can write

∥a∥i = (1/√n) ∥ asv(i) ⊙ val(a)∥.</p>
      <p>We want to show that this function indeed satisfies the desired properties:</p>
      <p>Proposition 3.2. The function

si : Rn → R,    ⃗x ↦ (1/√n) ∥ asv(i) ⊙ ⃗x∥

is a monotonic seminorm, and the restriction si |V has codomain contained in [0, 1].</p>
      <p>Proof. First, recall that the Euclidean norm is a monotonic seminorm. In order to prove subadditivity:

si(⃗x + ⃗y) = (1/√n) ∥ asv(i) ⊙ (⃗x + ⃗y)∥ = (1/√n) ∥(asv(i) ⊙ ⃗x) + (asv(i) ⊙ ⃗y)∥
≤ (1/√n) ∥ asv(i) ⊙ ⃗x∥ + (1/√n) ∥ asv(i) ⊙ ⃗y∥ = si(⃗x) + si(⃗y).</p>
      <p>In order to prove absolute homogeneity:

si(λ⃗x) = (1/√n) ∥ asv(i) ⊙ λ⃗x∥ = (1/√n) ∥λ (asv(i) ⊙ ⃗x)∥ = |λ| (1/√n) ∥ asv(i) ⊙ ⃗x∥ = |λ| si(⃗x).</p>
      <p>In order to prove monotonicity, consider ⃗x = ⟨x1, ..., xn⟩, ⃗y = ⟨y1, ..., yn⟩ such that |x1| ≤ |y1|, ..., |xn| ≤ |yn|. Then:

si(⃗x) = (1/√n) ∥ asv(i) ⊙ ⃗x∥ ≤ (1/√n) ∥ asv(i) ⊙ ⃗y∥ = si(⃗y).</p>
      <p>In order to prove that the restriction si |V has codomain contained in [0, 1]:

0 = (1/√n) ∥⃗0∥ ≤ si(⃗x) ≤ (1/√n) ∥⃗1∥ = (1/√n) √n = 1.</p>
      <p>Example, part 5. In Figure 3 we approximate the values of the impact function in our running example.</p>
      <p>Besides the one proposed here, there are several possible ways to define an impact function, and empirical evidence will be necessary to evaluate and compare these different definitions. In particular, one possibility is to consider additional features of arguments besides values, for instance their presentation form, their emotional aspects and so on. A simple way to capture these additional features would be to include in the computation a multiplicative factor given by a function E : I × A → [0, 1]:

∥ · ∥ i : A → [0, 1],    a ↦ E(i, a) · √( (1/n) ∑j≤n (asv(i)j · val(a)j)² ).</p>
      <p>Further investigation in this direction is left to future work.</p>
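      <p>The impact measure can be computed directly from the definition. The vectors below are hypothetical stand-ins (with n = 4 rather than 20, for brevity), so the first printed number is illustrative only; the boundary cases, however, follow from the definition itself.</p>
      <preformat>
```python
import math

def impact(asv_i, val_a):
    """Compute ∥a∥i = (1/√n) · ∥asv(i) ⊙ val(a)∥: the Euclidean norm of
    the componentwise product, scaled so the result stays in [0, 1]."""
    n = len(val_a)
    s = sum((w * v) ** 2 for w, v in zip(asv_i, val_a))
    return math.sqrt(s / n)

# Hypothetical 4-dimensional audience and argument vectors.
asv_1 = [0.9, 0.3, 0.0, 0.5]
val_a = [1.0, 0.6, 0.0, 0.0]

print(impact(asv_1, val_a))          # some value strictly inside [0, 1]
print(impact(asv_1, [0.0] * 4))      # the zero vector has impact 0
print(impact([1.0] * 4, [1.0] * 4))  # the all-ones case gives exactly 1
```
      </preformat>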
    </sec>
    <sec id="sec-4">
      <title>4. Convincing arguments</title>
      <p>In this section we explore the issue of expressing in formal terms the goals corresponding to the question:
“Will the argument b ∈ A convince the audience?”</p>
      <p>A first simple goal that one can identify is to maximise overall effectiveness:
Goal 1. Find the argument a ∈ Apos for which the following quantity is maximal:

∑i≤k pi · ∥a∥i.</p>
      <p>Example, part 6. We have ∑i≤k pi · ∥a1∥i = 0.4∥a1∥1 + 0.6∥a1∥2 ≈ 0.182; ∑i≤k pi · ∥a2∥i = 0.4∥a2∥1 + 0.6∥a2∥2 ≈ 0.200; ∑i≤k pi · ∥a3∥i = 0.4∥a3∥1 + 0.6∥a3∥2 ≈ 0.156. Hence the chosen argument is a2.</p>
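      <p>Goal 1 amounts to a weighted argmax over Apos. A sketch, using hypothetical per-audience impacts chosen so that the weighted sums reproduce the aggregates of Example, part 6 (the per-audience values themselves are not given in the text):</p>
      <preformat>
```python
p = {1: 0.4, 2: 0.6}  # audience weights from Example, part 4

# Hypothetical per-audience impacts ∥a∥i; only their weighted sums
# are constrained to match the 0.182 / 0.200 / 0.156 of the example.
impacts = {
    ("a1", 1): 0.20, ("a1", 2): 0.17,
    ("a2", 1): 0.14, ("a2", 2): 0.24,
    ("a3", 1): 0.21, ("a3", 2): 0.12,
}

def overall(a):
    """The quantity maximised by Goal 1: ∑i pi · ∥a∥i."""
    return sum(p[i] * impacts[(a, i)] for i in p)

best = max(("a1", "a2", "a3"), key=overall)
print(best)  # a2
```
      </preformat>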
      <p>Another possible goal is to maximise the number of people convinced by an argument a ∈ Apos, that is:
Goal 2. Find the argument a ∈ Apos for which the following quantity is maximal:

∑i≤k pi · χ(coni(a)),

where χ(φ) = 1 if φ is true and χ(φ) = 0 if φ is false, and coni(a) is true if a is able to convince the audience i.</p>
      <p>But what does it mean for an argument to convince an audience? In other words, how do we define
coni?</p>
      <p>One reasonable view is that an argument b convinces the audience i if and only if every attack a on
it is rejected by some condition (that may depend on b and i):</p>
      <p>coni(b) ⇐⇒ ∀a→b argument a is rejected.</p>
      <sec id="sec-4-1">
        <title>How do we determine whether an argument a is rejected?</title>
        <p>For a first tentative answer to this question, we define the following relation:</p>
        <p>a ↠ i b ⇐⇒ (a → b &amp; ∥a∥i ≥ ∥b∥i).</p>
        <p>We read a ↠ i b as “argument a defeats argument b according to audience i.” We also write a ↠̸ i b for ¬(a ↠ i b). The idea, in line with the spirit of Bench-Capon’s approach, is that an argument a can successfully attack another argument b according to a given audience i only if b is not preferred by i to a on the basis of the promoted values.</p>
        <p>Accordingly, the notion of convincing argument can be formalized as follows:</p>
        <p>coni(b) ⇐⇒ ∀a→b a ̸↠ i b ⇐⇒ ∀a→b ∥a∥i &lt; ∥b∥i.</p>
        <p>This formalisation is rather simple and corresponds to a strong notion of convincing argument: the argument is regarded as unquestionable by the audience, as it does not receive any effective attack. This strong notion may however turn out to be inadequate in some cases, as discussed in the next example.
Example, part 7. Let us consider the fragment of our framework consisting of a1, b1 and c2, where a1 and b1 attack each other and c2 attacks b1. According to the formalisation given above, argument b1 is clearly not convincing since it is defeated by c2. However, a1 is also not convincing, as a1 and b1 defeat each other. On the other hand, intuitively, we are inclined to say that a1 is convincing since its only defeater is defeated by some undefeatable argument.</p>
        <p>As a solution, we propose to define coni using the grounded semantics [2], which corresponds to
a recursive notion of strong defence. We recall here the basic notions of grounded semantics for the
benefit of readers not already familiar with Dung’s theory of argumentation frameworks. According to
grounded semantics, convincing arguments can be identified as follows. We start with arguments that
are not defeated by any other argument, making them unquestionable. Subsequently, arguments that
are defended by these unquestionable arguments (i.e. arguments whose attackers are in turn attacked by
unquestionable arguments) are also recursively considered unquestionable. More precisely, we propose
the following algorithm:
1. Consider ⟨A, ↠ i⟩ as an argumentation framework.
2. Add undefeated arguments to the grounded extension EGR(A).
3. Remove from the framework A the arguments defeated by elements of EGR(A).
4. If in the modified framework there are undefeated arguments, then go back to Step 2, else exit.
5. Define</p>
        <p>coni(a) ⇐⇒ a ∈ EGR(A).</p>
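        <p>The five steps above can be sketched as follows, using the a1/b1/c2 fragment of Example, part 7 as input; the defeat pairs encode ↠ i for a single audience.</p>
        <preformat>
```python
def grounded_extension(args, defeats):
    """Iteratively collect undefeated arguments and discard the
    arguments they defeat (Steps 1-5 of the algorithm)."""
    args = set(args)
    defeats = set(defeats)
    extension = set()
    while True:
        undefeated = {a for a in args
                      if not any((b, a) in defeats for b in args)}
        new = undefeated - extension
        if not new:
            break
        extension |= new
        # Remove the arguments defeated by the extension, together
        # with every defeat involving them.
        killed = {a for a in args
                  if any((b, a) in defeats for b in extension)}
        args -= killed
        defeats = {(x, y) for (x, y) in defeats
                   if x in args and y in args}
    return extension

def con(a, args, defeats):
    """coni(a) ⟺ a belongs to the grounded extension of ⟨A, ↠i⟩."""
    return a in grounded_extension(args, defeats)

# Fragment from Example, part 7: a1 and b1 defeat each other,
# while c2 defeats b1 and is itself undefeated.
args = {"a1", "b1", "c2"}
defeats = {("a1", "b1"), ("b1", "a1"), ("c2", "b1")}
print(sorted(grounded_extension(args, defeats)))  # ['a1', 'c2']
```
        </preformat>
        <p>As intended, b1 is excluded while a1 is reinstated by c2, which resolves the issue the strong notion ran into.</p>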
        <p>It is clear that by using this definition we solve the issue encountered in Example, part 7.
Example, part 8. The outcomes of the grounded semantics for audiences 1 and 2 are shown in Figures 4 and 5, respectively. We observe that arguments a1, a3 are convincing to audience 1, while arguments a1, a2 are convincing to audience 2. Therefore: ∑i≤k pi · χ(coni(a1)) = 0.4 · 1 + 0.6 · 1 = 1; ∑i≤k pi · χ(coni(a2)) = 0.4 · 0 + 0.6 · 1 = 0.6; ∑i≤k pi · χ(coni(a3)) = 0.4 · 1 + 0.6 · 0 = 0.4. Hence the chosen argument is a1.</p>
        <p>In addressing these considerations, future research may explore refinements or extensions of the
notion of convincing argument. In particular, one could consider the application of Bayesian reasoning,
machine learning techniques, or other methodologies to datasets concerning past Public Interest
Communication campaigns in order to better characterize the notion of convincing argument in
different contexts.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Possible generalisation: list-of-arguments campaigns</title>
      <p>In several cases, campaigns consist of a list of arguments, often presented as a “top ten” or similar format. This approach aims to distil complex ideas or positions into a concise and easy-to-remember set of key points. This strategy aims to ensure clarity and resonance with the audience, making the message more impactful. By strategically selecting these arguments, campaigns can effectively align with the values and concerns of their audience, thereby increasing their persuasive influence. The order of these arguments can be crucial. In some contexts, the first argument holds significant weight as it captures initial attention and sets the tone for the entire message. Alternatively, the last argument can leave a lasting impression, often considered the most impactful as it resonates as the final thought with the audience. Understanding how the sequence influences perception and engagement is vital for crafting compelling campaigns that effectively communicate their intended message. Moreover, audience preferences vary regarding the length of argument lists. Some audiences may prefer a shorter list that highlights key points succinctly, allowing for easy recall and immediate impact. In contrast, others may favour a longer list that provides comprehensive coverage of different aspects of the issue, appealing to those who seek detailed information and thorough analysis.</p>
      <p>Under these considerations, if a list ℓ ∈ A∗ is given4, we have to consider several factors, including its length and its order. These factors can be taken care of by functions Oi : A∗ → [0, 1]∗ that satisfy length(Oi(ℓ)) = length(ℓ). The j-th element of Oi(ℓ) represents the effect of being in position j on the overall impact of an argument.
4 Given a set S, we denote the collection of lists of elements of S by S∗. In this respect a real interval is treated as the set of all numbers included between its endpoints.</p>
      <p>For each audience i ≤ k, we extend ∥ · ∥ i to lists. We propose:

∥ · ∥ i : A∗ → [0, 1],    ⟨a1, ..., am⟩ ↦ √( ∑j≤m (Oi(⟨a1, ..., am⟩)j · ∥aj∥i)² ).

Then, as above, a first goal can be:
Goal 1. Find ℓ among lists of Apos of a bounded (or fixed) length, such that the following quantity is maximal:

∑i≤k pi · ∥ℓ∥i.</p>
      <p>Again, another possible goal is to maximise the number of people convinced:

∑i≤k pi · χ(coni(ℓ)).

The question is then to define a notion of convincing list.</p>
      <p>An option is to say that a list ℓ convinces i if and only if at least one argument of ℓ convinces i. Accordingly, taking also into account the impact of the order, a possible preliminary definition is:

coni(⟨b1, ..., bm⟩) ⇐⇒ ⋁j≤m ∀a→bj ∥a∥i &lt; Oi(⟨b1, ..., bm⟩)j · ∥bj∥i.</p>
      <p>There are several other properties that we might want to consider, but we don’t address them in this preliminary work. These include the consistency and cohesion of the list. Consistency refers to whether the arguments within the list attack each other, potentially leading to internal conflicts that undermine the overall persuasiveness of the list. Cohesion, on the other hand, pertains to the relationships between the arguments in the list. Cohesive arguments are those that support and reinforce each other, creating a unified and convincing narrative. Taking into account both the consistency and cohesion of the arguments can help ensure that the list is both logically sound and effectively persuasive.</p>
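      <p>A sketch of the list extension in code; the position-effect function below, a mild primacy bias, is a hypothetical choice, since Oi is left abstract in this preliminary work.</p>
      <preformat>
```python
import math

def O_i(length):
    """Hypothetical position-effect function Oi: earlier positions count
    slightly more. Any function into [0, 1] of matching length works."""
    return [1.0 - 0.1 * j for j in range(length)]

def list_impact(impacts):
    """∥⟨a1, ..., am⟩∥i = √( ∑j (Oi(ℓ)j · ∥aj∥i)² ), where `impacts`
    holds the per-argument values ∥aj∥i in list order."""
    weights = O_i(len(impacts))
    return math.sqrt(sum((w * x) ** 2 for w, x in zip(weights, impacts)))

print(round(list_impact([0.2]), 3))       # 0.2  (single argument)
print(round(list_impact([0.2, 0.2]), 3))  # 0.269 (second copy adds less)
```
      </preformat>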
    </sec>
    <sec id="sec-6">
      <title>6. Final remarks</title>
      <p>One important question to consider is whether this approach is meaningful for representing individuals
who are already convinced. These individuals have already accepted the arguments and adopted the
behaviours or policies being promoted. Therefore, it is crucial to evaluate if the communication under
study provides any additional value to this demographic. Specifically, we need to determine if it helps in
reinforcing their beliefs, preventing backsliding, or equipping them to persuade others. Understanding
the impact on already convinced individuals can help refine our strategy to maintain and strengthen
their commitment.</p>
      <p>Another critical aspect to consider is the potential side effects of a campaign. For instance, could the campaign inadvertently alienate or provoke certain groups? Might it create unintended social or psychological impacts? Can the campaign, in extreme cases, have an effect opposite to that desired? (See an example in [5].) Assessing the possible side effects is essential to ensure that the campaign does not generate negative consequences that outweigh its benefits.</p>
      <p>We want to include a temporal dimension in future phases of this work. In doing so, it becomes crucial
to consider the goals of individual steps within the campaign. Over time, the dynamics of persuasion
change, and strategies must adapt accordingly. For example, if a certain number of people are already
convinced, this can serve as a powerful argument in later stages of the campaign to build momentum
and credibility. Highlighting the growing support can encourage others to join, leveraging social proof
as a persuasive tool. Therefore, a temporal strategy should include phased goals and tailored messages
that evolve with the campaign’s progress.</p>
      <p>It is also important to explore alternative approaches to our current strategy. One such approach could involve comparing conclusions rather than arguments. Instead of focusing on the individual arguments and their interrelations, we could evaluate the overall conclusions reached by different groups or individuals. This method might provide a clearer understanding of the end results and help identify common ground or major points of divergence. By comparing conclusions, we can potentially streamline the analysis and focus on the most impactful elements of the debate, leading to more effective communication strategies.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments and References</title>
      <p>We acknowledge financial support from MUR project PRIN 2022 EPICA “Enhancing Public Interest Communication with Argumentation” (CUP D53D23008860006) funded by the European Union - Next Generation EU.
[2] Dung, P.M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357.
[3] Elwood, Z. (2020). The best, most logical arguments against veganism. Medium. https://apokerplayer.medium.com/the-best-most-logical-anti-vegan-arguments-477ebcc8aee1 (consulted on July 30, 2024).
[4] Haidt, J., Joseph, C. (2004). Intuitive ethics: how innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4):55-66.
[5] Hornik, R., Jacobsohn, L., Orwin, R., Piesse, A., Kalton, G. (2008). Effects of the National Youth Anti-Drug Media Campaign on youths. American Journal of Public Health, 98(12):2229-2236. https://www.doi.org/10.2105/AJPH.2007.125849.
[6] Jacobson, M. (2006). Six Arguments for a Greener Diet: How a More Plant-Based Diet Could Save Your Health and the Environment. Center for Science in the Public Interest, Washington.
[7] Kaci, S., van der Torre, L. (2008). Preference-based argumentation: Arguments supporting multiple values. International Journal of Approximate Reasoning, 48(3):730-751.
[8] Kiesel, J., Alshomary, M., Handke, N., Cai, X., Wachsmuth, H., Stein, B. (2022). Identifying the Human Values behind Arguments. Proceedings of ACL 2022, 4459-4471. https://www.doi.org/10.18653/v1/2022.acl-long.306.
[9] van der Meer, M., Vossen, P., Jonker, C., Murukannaiah, P. (2023). Do Differences in Values Influence Disagreements in Online Discussions? Proceedings of EMNLP 2023, 15986-16008. https://www.doi.org/10.18653/v1/2023.emnlp-main.992.
[10] Qiu, L., Zhao, Y., Li, J., Lu, P., Peng, B., Gao, J., Zhu, S. (2022). ValueNet: A New Dataset for Human Value Driven Dialogue System. Proceedings of the AAAI Conference on Artificial Intelligence, 36, 11183-11191. https://www.doi.org/10.1609/aaai.v36i10.21368.
[11] Schwartz, S.H., Cieciuch, J., Vecchione, M., Davidov, E., Fischer, R., Beierlein, C., Ramos, A., Verkasalo, M., Lönnqvist, J.-E., Demirutku, K. et al. (2012). Refining the theory of basic individual values. Journal of Personality and Social Psychology, 103(4). https://www.doi.org/10.1037/a0029393.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Bench-Capon</surname>
            ,
            <given-names>T.J.M.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Persuasion in Practical Argument Using Value-based Argumentation Frameworks</article-title>
          .
          <source>Journal of Logic and Computation</source>
          .
          <volume>13</volume>
          . 10.1093/logcom/13.3.429.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>