A Vector-Based Extension of Value-Based Argumentation for Public Interest Communication

Pietro Baroni¹, Giulio Fellin¹,*, Massimiliano Giacomin¹ and Carlo Proietti²

¹ University of Brescia, Department of Information Engineering, via Branze 38 - 25123 Brescia, Italy
² National Research Council of Italy (CNR), Institute for Computational Linguistics "A. Zampolli", Area di ricerca di Genova, Torre di Francia, Via de Marini 6 - 16149 Genova, Italy

* Corresponding author. pietro.baroni@unibs.it (P. Baroni); giulio.fellin@unibs.it (G. Fellin); massimiliano.giacomin@unibs.it (M. Giacomin); carlo.proietti@ilc.cnr.it (C. Proietti)

AI 3 2024 - 8th Workshop on Advances in Argumentation in Artificial Intelligence

Abstract

In this paper, we propose a mathematical model to quantify and analyse the impact of public interest communication on target audiences. Building on Bench-Capon's value-based approach, our model introduces the concept of value vectors to represent a multi-dimensional spectrum of values influencing audience perception and response. By employing vectors, we aim to capture the nuanced interplay between diverse values and the effectiveness of communication strategies.

Keywords: argumentation, value-based model, vector space

1. Introduction

Public Interest Communication plays a crucial role in promoting beneficial behaviours and policies, clarifying their rationale, and ensuring their legitimacy among stakeholders, often institutions. Examples of such communication efforts include vaccination campaigns and campaigns advocating for a greener diet.

Public Interest Communication aims to convince a general audience to adopt certain behaviours by presenting a variety of supporting arguments. For instance, in a campaign promoting a greener diet, arguments might include:

— Eating more fruits and vegetables benefits your health.
— Eating more fruits and vegetables potentially benefits animal welfare.
— Eating more fruits and vegetables benefits the environment.
— Eating more fruits and vegetables benefits the local economy.

Despite their importance, public interest campaigns often face significant challenges. Ineffectiveness or backfire effects, caused by poorly targeted communication, are common issues, as evidenced by numerous unsuccessful and costly campaigns. One key problem is that these campaigns target a general audience with diverse knowledge, needs, values, and attitudes. Finding a one-size-fits-all strategy is difficult, which is why campaigns typically leverage multiple motivations, as illustrated by the arguments in the greener diet example. Moreover, public interest campaigns are primarily led by practitioners relying on experience and practical know-how. This makes it challenging to analyse the reasons for a campaign's success or failure. The need for such explanations has led to the emerging field of Public Interest Communication studies.

Computational argumentation offers valuable tools for Public Interest Communication by:

1. Reconstructing the general structure of a debate, identifying its basic components (arguments) and their relationships.
2. Providing a principled method to assess which arguments are justified in a debate, based on their status (e.g., unattacked or ultimately reinstated by other arguments) as dictated by the chosen semantics.
This capability allows for an a posteriori analysis of the argumentative content and structure of public interest campaigns, potentially explaining their success or failure.

To effectively apply computational argumentation to Public Interest Communication, several challenges must be addressed:

1. Argument Retrieval: how to mine arguments from public campaigns, which requires identifying an annotation format for arguments and their mutual relations, such as attacks and supports.
2. Incomplete Information: public interest campaign materials often provide limited information for reconstructing and evaluating the generated argumentative content. Arguments typically come from a single source (the issuing organization), with counterarguments and counter-counterarguments often implicit or from indirect sources (e.g., social forum discussions), necessitating abductive reasoning.
3. Diverse Audiences: different individuals assess argumentative structures in various ways, due to differing inferential/epistemic standards and perceptions of defeats. Additionally, real-life arguments have non-inferential components, such as promoted values, affecting their assessment.

In this preliminary paper, we place particular emphasis on the challenge of Diverse Audiences. Our focus is on developing a mathematical model to quantify and analyse how different inferential and epistemic standards, as well as the influence of non-inferential components like promoted values, affect the assessment of public interest communication among varied target audiences. Arguments can be represented as vectors of coordinates, where each coordinate represents the strength of the argument with respect to a particular value or aspect. For each individual or audience, a mapping function assigns weights to arguments, determining which arguments defeat others based on their relative weights.

Building on Bench-Capon's value-based approach [1], our model introduces the concept of value vectors to represent a multi-dimensional spectrum of values influencing audience perception and response. By employing vectors, we aim to capture the nuanced interplay between diverse values and the effectiveness of communication strategies. This approach allows for a more detailed and structured analysis of how different arguments resonate with various segments of the audience, providing insights into the reasons behind the success or failure of public interest campaigns.

In [1] each argument is attributed a single value. The hierarchy of values for a given audience determines whether or not attacks from (or to) other arguments are successful. This approach is arguably the most immediate one for representing how a value dimension may be attached to argumentative discourse. Yet, as a matter of fact, arguments may refer to more than a single value as they become more articulated. In view of this, [7] generalises Bench-Capon's approach by allowing arguments that refer to multiple values. This is implemented by a function arg attributing a (possibly empty) set of arguments to each value. Therefore, an argument a has multiple values whenever, for two distinct values v1 and v2, a belongs to the intersection of arg(v1) and arg(v2); it has no value when a is not a member of arg(v) for any value v. Contrary to [1], in [7] a total ordering among values does not suffice to uniquely determine the defeat relation. To this end it is necessary to define an ordering among sets of values.
In [7, Sect. 3.2] three different instances of such an ordering are introduced, derived from a primitive ordering among arguments.

The framework we introduce is similar in spirit to that of [7] insofar as it attributes a bundle of n features (values) to each argument, specified as a vector in an n-dimensional space. Differently from [7], the vectorial representation allows for grading how much a given value attaches to an argument. This, in turn, enables a wider range of possibilities for defining the relative weight of arguments with respect to a given audience, and therefore the success or failure of an attack. We define such a weight by means of an impact measure (Sect. 3). It is important to stress that this approach makes it possible, in principle, to encompass different psychological theories of basic human values, such as those by Schwartz [11] and Haidt [4].

The paper is organised as follows. Section 2 introduces the proposed formal framework, Section 3 describes the impact measure adopted, while Section 4 discusses the notion of convincing argument. Section 5 explores a generalisation to lists of arguments included in a campaign, while Section 6 concludes the paper.

2. The framework

We propose a Value-based Argumentation Framework extending the approach in [1]. In particular, we consider a triple ⟨A, →, Apos⟩ where ⟨A, →⟩ is an argumentation framework (i.e. A is a set of arguments and → ⊆ A × A is a binary relation) and Apos ⊆ A. We read a → b as "a attacks b." The set Apos is the set of arguments expressing the goals of the considered communication campaign. In practical applications, one starts by identifying Apos and then constructs A by adding possible chains of attacks to elements of Apos. By extension, we also say that a set of arguments S ⊆ A attacks b (in symbols S → b) if there is a ∈ S such that a → b. The framework ⟨A, →⟩ is conveniently represented as a directed graph in which the arguments are vertices and the edges represent attacks between arguments.

Example, part 1. Let us consider a campaign which aims to encourage a shift towards a greener diet by promoting the environmental and health benefits of reducing meat consumption and increasing plant-based food intake. We can identify the following set for Apos (the list is taken from [6]; for simplicity, some arguments have been merged):

Apos = {
  a1: Less chronic disease, better overall health and less foodborne illness,
  a2: Better environment: soil, water, air,
  a3: Less animal suffering }.

To build A, we must consider possible attacks on these arguments, attacks on those attacks, and so on. For instance, let us take (here we mostly follow [3]; we acknowledge that this list is not exhaustive and is provided solely for illustrative purposes):

A = Apos ∪ {
  b1: Veganism may be unhealthy, e.g. different blood types need different diets,
  b2: Morality is relative,
  b3: Plant-based agriculture still causes harm,
  b4: Not everyone can be vegan,
  b5: There are worse things going on in the world, this is a secondary cause,
  b6: The world is a tough place, so we have to deal with bad things,
  c1: Vegan athletes exist,
  c2: Many nutritional experts state that veganism can be healthy and optimal,
  c3: The blood-type diet theory has been debunked,
  c4: Most people are not moral relativists about unnecessary suffering,
  c5: Recognizing that the world is cruel is not an excuse to do harm,
  c6: The goal is to make progress, no one expects the world to become perfect,
  d1: Experts are influenced by financial interests and agendas,
  d2: Not all experts agree,
  e1: There is consensus among independent experts about the health benefits }.

Then ⟨A, →, Apos⟩ can be represented as in Figure 1.
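For concreteness, the triple can be encoded directly as plain data structures. The following is a minimal Python sketch; it spells out only the three attacks among a1, b1 and c2 that are explicitly discussed later (Sect. 4, Example, part 7), while the remaining edges of the relation are those depicted in Figure 1.

```python
# Minimal encoding of the triple <A, ->, Apos>.
A = {"a1", "a2", "a3",
     "b1", "b2", "b3", "b4", "b5", "b6",
     "c1", "c2", "c3", "c4", "c5", "c6",
     "d1", "d2", "e1"}

A_pos = {"a1", "a2", "a3"}        # the goals of the campaign

attacks = {                        # (attacker, attacked) pairs
    ("b1", "a1"), ("a1", "b1"),    # a1 and b1 attack each other
    ("c2", "b1"),                  # c2 attacks b1
    # ... the remaining edges of Figure 1
}

def attackers(b, attacks):
    """The set of arguments attacking b."""
    return {a for (a, x) in attacks if x == b}
```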
The key question is whether a given argument a ∈ Apos will convince the public targeted by our campaign. In order to propose some ways to answer this question, we endow ⟨A, →, Apos⟩ with some additional structure.

[Figure 1: The diagram representing the framework described in Example 1.]

The set of audiences is a set of the form I = {1, 2, 3, ..., k}, of cardinality k. To each audience i ≤ k we associate a weight p_i. Weights satisfy the following conditions:

  p_i ≥ 0 for all i ≤ k,    ∑_{i=1}^k p_i = 1.

This takes into account the fact that each i may represent not an individual listener, but a portion of the whole population that has similar values.

Let us suppose that we have a fixed (and ordered) set of values (e.g. equality, individual health, collective health, etc.) of cardinality n. In this preliminary paper, we do not argue for a specific set of values, as this falls outside the scope of the present work. Various solutions can be found in the literature, such as in [8, 9, 10, 11]. For illustrative purposes we follow the list of classes of values from [8]:

Example, part 2. List of values:

1. Self-direction: thought
2. Self-direction: action
3. Stimulation
4. Hedonism
5. Achievement
6. Power: dominance
7. Power: resources
8. Face
9. Security: personal
10. Security: societal
11. Tradition
12. Conformity: rules
13. Conformity: interpersonal
14. Humility
15. Benevolence: caring
16. Benevolence: dependability
17. Universalism: concern
18. Universalism: nature
19. Universalism: tolerance
20. Universalism: objectivity

In [1], each argument is associated with an individual value. We believe that this can be refined, as an argument can rely on more than one value, each to a different degree. To this purpose, we define:

— the space of values as V = [0,1]^n, each dimension of which is associated with the corresponding value;
— the value function val : A → V, which assigns to each a ∈ A its vector of values.

It may be possible to refine the model by imposing certain restrictions on the vectors resulting from val. Such restrictions should be motivated by further considerations or empirical evidence.
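As a data structure, val is simply a map from arguments to points of [0,1]^n. The minimal Python sketch below builds the vectors of the three campaign arguments a1, a2, a3, with the (purely illustrative) entries reported in Figure 2; the running example below fills in the remaining vectors.

```python
n = 20  # number of values in the list above

def vec(entries):
    """Build a vector in [0,1]^n from a sparse {value index: degree} dict
    (indices are 1-based, as in the list of values above)."""
    v = [0.0] * n
    for j, degree in entries.items():
        v[j - 1] = degree
    return v

# Entries as in Figure 2.
val = {
    "a1": vec({9: 1.0, 10: 0.6}),            # Security: personal/societal
    "a2": vec({10: 0.6, 16: 0.7, 18: 1.0}),  # incl. Universalism: nature
    "a3": vec({15: 0.2, 17: 0.6, 18: 0.9}),  # incl. Universalism: concern
}
```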
Example, part 3. Let us consider for instance the arguments a1, b1, c2 from part 1 of our running example, i.e.

a1: Less chronic disease, better overall health and less foodborne illness,
b1: Veganism may be unhealthy, e.g. different blood types need different diets,
c2: Many nutritional experts state that veganism can be healthy and optimal.

Argument a1 relies heavily on value 9 ('Security: personal', which includes personal health) and, to a lesser extent, on value 10 ('Security: societal'). It has little to no connection to other values. Therefore, a reasonable vector of values is

val(a1) = ⟨0, 0, 0, 0, 0, 0, 0, 0, 1, 0.6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0⟩.

Notice that b1 also relies on the very same values (although with a different conclusion), so val(b1) = val(a1). Argument c2 instead relies solely on value 20 ('Universalism: objectivity'), hence

val(c2) = ⟨0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1⟩.

Figure 2 presents the vectors of values for all the arguments considered in our running example.

        values
arg.   1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19  20
a1     0   0   0   0   0   0   0   0   1   .6  0   0   0   0   0   0   0   0   0   0
a2     0   0   0   0   0   0   0   0   0   .6  0   0   0   0   0   .7  0   1   0   0
a3     0   0   0   0   0   0   0   0   0   0   0   0   0   0   .2  0   .6  .9  0   0
b1     0   0   0   0   0   0   0   0   1   .6  0   0   0   0   0   0   0   0   0   0
b2     .8  .7  0   0   0   0   0   0   0   0   0   0   0   0   0   0   .2  0   .4  0
b3     0   0   0   0   0   0   0   0   .6  0   0   0   0   0   0   .7  0   1   0   0
b4     0   0   0   0   0   0   .9  0   0   0   0   0   0   0   0   0   .6  0   0   0
b5     0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   .6  0   0   .7
b6     0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   .7
c1     0   0   0   0   0   0   0   0   .4  0   0   0   0   0   0   0   0   0   0   .9
c2     0   0   0   0   0   0   0   0   .6  .4  0   0   0   0   0   0   0   0   0   1
c3     0   0   0   0   0   0   0   0   .6  .4  0   0   0   0   0   0   0   0   0   1
c4     0   0   0   0   0   0   0   0   0   0   0   0   0   0   .6  0   0   0   0   .8
c5     0   0   0   0   0   0   0   0   0   .3  0   0   0   0   .2  .8  .6  0   0   .6
c6     0   0   0   0   .6  0   .6  0   0   .6  0   0   0   0   0   .7  0   0   0   .6
d1     0   0   0   0   0   .3  .4  .3  0   0   0   0   0   0   0   0   0   0   0   .7
d2     .4  0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   .7
e1     .4  0   0   0   0   0   0   0   .6  .4  0   0   0   0   0   0   0   0   0   .8

Figure 2: Tabular representation of the vectors of values of the arguments in the running example. We stress again that these values are given for illustrative purposes.

Each audience i ≤ k will have its own preferences among values. We represent this by introducing the audience-specific value function asv : I → V, which assigns to each audience i a vector whose j-th entry represents the importance that audience i gives to value j. As above, further considerations or empirical evidence may require imposing some restrictions on the vectors resulting from asv.

Example, part 4. Suppose that I = {1, 2}, and assign to asv the following values:

        values
i      1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19  20
1      .7  .6  .4  .3  .5  .4  .3  .4  .7  .8  .3  .4  .5  .6  .8  .7  .9  .8  .8  .7
2      .7  .8  .3  .2  .7  .6  .6  .5  .7  .6  .8  .8  .7  .5  .6  .7  .5  .5  .6  .6

We also set p_1 = .4 and p_2 = .6.
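In code, the audience model of Example, part 4 amounts to a second map of vectors plus the weights p, which must be nonnegative and sum to 1. A minimal sketch:

```python
# The two audiences of Example, part 4.
asv = {
    1: [.7, .6, .4, .3, .5, .4, .3, .4, .7, .8,
        .3, .4, .5, .6, .8, .7, .9, .8, .8, .7],
    2: [.7, .8, .3, .2, .7, .6, .6, .5, .7, .6,
        .8, .8, .7, .5, .6, .7, .5, .5, .6, .6],
}

p = {1: 0.4, 2: 0.6}  # population weights
assert all(w >= 0 for w in p.values())
assert abs(sum(p.values()) - 1.0) < 1e-9
```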
We have now defined all the primitive structures that we expect the framework to have, and we want to answer the question from above: given an argument a ∈ Apos, will a convince the audience? To this purpose, we introduce in the next section an impact measure which aims to capture how effective each argument is for a given audience.

3. The impact measure

For each audience i ≤ k we introduce their impact measure as a function ∥·∥_i : A → [0,1]. We want it to assign to each argument a the impact that a has on i, by evaluating the values on which a relies and the importance that these values have for i. In particular, we want to ensure that if an argument a relies on a value that is important to i, then a will have a high impact on i. To this end, we require that ∥·∥_i can be written as a composition

∥·∥_i = s_i ∘ val :  A → V ⊆ ℝ^n → ℝ.

In words, given an argument a, its impact on audience i is determined on the basis of its vector of values val(a), which is then synthesised into a single real number, measuring the impact, through an audience-specific function s_i. The function s_i is required to satisfy the following properties:

— It is a seminorm, i.e. for any x, y ∈ ℝ^n and λ ∈ ℝ (where |·| denotes the absolute value):
  s_i(x + y) ≤ s_i(x) + s_i(y)   (subadditivity),
  s_i(λ·x) = |λ|·s_i(x)   (absolute homogeneity).
— It satisfies monotonicity: given x = ⟨x_1, ..., x_n⟩ and y = ⟨y_1, ..., y_n⟩, if |x_1| ≤ |y_1|, ..., |x_n| ≤ |y_n|, then s_i(x) ≤ s_i(y).
— The restriction s_i|V has codomain contained in [0,1].

In particular, monotonicity ensures that if an argument a relies on values which are of higher importance to i, then a will have a higher impact on i.

Seminorms satisfy some very useful properties. For instance:

Proposition 3.1. Let s_i : ℝ^n → ℝ be a seminorm. Then:
1. s_i(0) = 0.
2. Nonnegativity: for every x ∈ ℝ^n, we have s_i(x) ≥ 0.

Proof. 1. Absolute homogeneity implies s_i(0) = s_i(0·x) = 0·s_i(x) = 0.
2. Let x ∈ ℝ^n. Absolute homogeneity implies s_i(−x) = s_i((−1)·x) = |−1|·s_i(x) = s_i(x). Subadditivity now implies

  0 = s_i(0) = s_i(x + (−x)) ≤ s_i(x) + s_i(−x) = s_i(x) + s_i(x) = 2·s_i(x),

which, in turn, implies 0 ≤ s_i(x).

In this paper we consider the following definition for the impact measure:

∥·∥_i : A → [0,1],    a ↦ √( (1/n) ∑_{j=1}^n (asv(i)_j · val(a)_j)² ).

If we use ∥·∥ to denote the Euclidean norm and ⊙ to denote the Hadamard product (i.e. the component-wise product), then we can write

∥a∥_i = (1/√n) ∥asv(i) ⊙ val(a)∥.

We now show that this function indeed satisfies the desired properties.

Proposition 3.2. The function s_i : ℝ^n → ℝ, x ↦ (1/√n) ∥asv(i) ⊙ x∥ is a monotonic seminorm, and the restriction s_i|V has codomain contained in [0,1].

Proof. First, recall that the Euclidean norm is a monotonic seminorm. To prove subadditivity:

s_i(x + y) = (1/√n) ∥asv(i) ⊙ (x + y)∥ = (1/√n) ∥(asv(i) ⊙ x) + (asv(i) ⊙ y)∥
           ≤ (1/√n) ∥asv(i) ⊙ x∥ + (1/√n) ∥asv(i) ⊙ y∥ = s_i(x) + s_i(y).

To prove absolute homogeneity:

s_i(λ·x) = (1/√n) ∥asv(i) ⊙ λx∥ = (1/√n) ∥λ(asv(i) ⊙ x)∥ = |λ| (1/√n) ∥asv(i) ⊙ x∥ = |λ|·s_i(x).

To prove monotonicity, consider x = ⟨x_1, ..., x_n⟩ and y = ⟨y_1, ..., y_n⟩ such that |x_1| ≤ |y_1|, ..., |x_n| ≤ |y_n|. Then:

s_i(x) = (1/√n) ∥asv(i) ⊙ x∥ ≤ (1/√n) ∥asv(i) ⊙ y∥ = s_i(y).

Finally, to prove that the restriction s_i|V has codomain contained in [0,1], note that for x ∈ V (writing 0 and 1 for the all-zeros and all-ones vectors):

0 = (1/√n) ∥0∥ ≤ s_i(x) ≤ (1/√n) ∥1∥ = (1/√n) √n = 1.

arg.   a1   a2   a3   b1   b2   b3   b4   b5   b6   c1   c2   c3   c4   c5   c6   d1   d2   e1
∥·∥_1  .190 .236 .204 .190 .177 .236 .135 .163 .110 .154 .196 .196 .165 .208 .196 .119 .126 .183
∥·∥_2  .176 .176 .124 .176 .186 .176 .138 .115 .094 .136 .172 .172 .134 .170 .201 .120 .113 .165

Figure 3: Values of the impact measures of the arguments in the example for audiences 1 and 2.

Example, part 5. In Figure 3 we report the approximate values of the impact measure in our running example.

Besides the one proposed here, there are several possible ways to define an impact function, and empirical evidence will be necessary to evaluate and compare these different definitions. In particular, one possibility is to consider additional features of arguments besides values, for instance their presentation form, their emotional aspects and so on. A simple way to capture these additional features would be to include in the computation a multiplicative factor given by a function E : I × A → [0,1]:

∥·∥_i : A → [0,1],    a ↦ E(i,a) · √( (1/n) ∑_{j=1}^n (asv(i)_j · val(a)_j)² ).

Further investigation in this direction is left to future work.

4. Convincing arguments

In this section we explore the issue of expressing in formal terms the goals corresponding to the question: "Will the argument b ∈ A convince the audience?" A first simple goal that one can identify is to maximise overall effectiveness:

Goal 1. Find the argument a ∈ Apos for which the following quantity is maximal:

∑_{i=1}^k p_i · ∥a∥_i.

Example, part 6. Using the impact values of Figure 3, we have:

∑_{i=1}^k p_i · ∥a1∥_i = 0.4·∥a1∥_1 + 0.6·∥a1∥_2 ≈ 0.182;
∑_{i=1}^k p_i · ∥a2∥_i = 0.4·∥a2∥_1 + 0.6·∥a2∥_2 ≈ 0.200;
∑_{i=1}^k p_i · ∥a3∥_i = 0.4·∥a3∥_1 + 0.6·∥a3∥_2 ≈ 0.156.

Hence the chosen argument is a2.
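To make the computation concrete, here is a minimal Python sketch of the impact measure and of the quantity of Goal 1, assuming val, asv and p as in the earlier sketches; it reproduces the approximate figures of Figure 3 and Example, part 6.

```python
from math import sqrt

def impact(val_a, asv_i):
    """||a||_i = sqrt( (1/n) * sum_j (asv(i)_j * val(a)_j)^2 )."""
    n = len(val_a)
    return sqrt(sum((w * x) ** 2 for w, x in zip(asv_i, val_a)) / n)

def overall_effectiveness(val_a, asv, p):
    """The quantity of Goal 1: sum_i p_i * ||a||_i."""
    return sum(p[i] * impact(val_a, asv[i]) for i in p)

# With val, asv and p as above:
#   impact(val["a1"], asv[1])                ~ 0.190  (as in Figure 3)
#   overall_effectiveness(val["a2"], asv, p) ~ 0.200  (a2 maximises Goal 1)
```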
Another possible goal is to maximise the number of people convinced by an argument a ∈ Apos, that is:

Goal 2. Find the argument a ∈ Apos for which the following quantity is maximal:

∑_{i=1}^k p_i · χ(con_i(a)),

where χ(φ) = 1 if φ is true, χ(φ) = 0 if φ is false, and con_i(a) is true iff a is able to convince audience i.

But what does it mean for an argument to convince an audience? In other words, how do we define con_i? One reasonable view is that an argument b convinces audience i if and only if every attack on it is rejected by some condition (that may depend on b and i):

con_i(b) ⟺ for every a → b, argument a is rejected.

How do we determine whether an argument a is rejected? As a first tentative answer to this question, we define the following relation:

a ↠_i b ⟺ (a → b and ∥a∥_i ≥ ∥b∥_i).

We read a ↠_i b as "argument a defeats argument b according to audience i." We also write a ̸↠_i b for ¬(a ↠_i b). The idea, in line with the spirit of Bench-Capon's approach, is that an argument a can successfully attack another argument b according to a given audience i only if b is not preferred by i to a on the basis of the promoted values. Accordingly, the notion of convincing argument can be formalised as follows:

con_i(b) ⟺ ∀a→b a ̸↠_i b ⟺ ∀a→b ∥a∥_i < ∥b∥_i.

This formalisation is rather simple and corresponds to a strong notion of convincing argument: the argument is regarded as unquestionable by the audience, as it does not receive any effective attack. This strong notion may however turn out to be inadequate in some cases, as discussed in the next example.

Example, part 7. Let us consider the fragment of our framework consisting of a1, b1 and c2, where a1 and b1 attack each other and c2 attacks b1. According to the formalisation given above, argument b1 is clearly not convincing, since it is defeated by c2. However, a1 is also not convincing, as a1 and b1 defeat each other. On the other hand, intuitively, we are inclined to say that a1 is convincing, since its only defeater is defeated by an undefeatable argument.

As a solution, we propose to define con_i using the grounded semantics [2], which corresponds to a recursive notion of strong defence. We recall here the basic notions of grounded semantics for the benefit of readers not already familiar with Dung's theory of argumentation frameworks. According to grounded semantics, convincing arguments can be identified as follows. We start with arguments that are not defeated by any other argument, making them unquestionable. Subsequently, arguments that are defended by these unquestionable arguments (i.e. arguments whose attackers are in turn attacked by unquestionable arguments) are also recursively considered unquestionable. More precisely, we propose the following algorithm (a runnable sketch is given after Example, part 8):

1. Consider ⟨A, ↠_i⟩ as an argumentation framework.
2. Add undefeated arguments to the grounded extension E_GR(A).
3. Remove from the framework A the arguments defeated by elements of E_GR(A).
4. If the modified framework contains undefeated arguments, go back to Step 2; otherwise exit.
5. Define con_i(a) ⟺ a ∈ E_GR(A).

It is clear that by using this definition we solve the issue encountered in Example, part 7.

Example, part 8. The outcomes of the grounded semantics for audiences 1 and 2 are shown in Figures 4 and 5, respectively. We observe that arguments a1, a3 are convincing to audience 1, while arguments a1, a2 are convincing to audience 2. Therefore:

∑_{i=1}^k p_i · χ(con_i(a1)) = 0.4·1 + 0.6·1 = 1;
∑_{i=1}^k p_i · χ(con_i(a2)) = 0.4·0 + 0.6·1 = 0.6;
∑_{i=1}^k p_i · χ(con_i(a3)) = 0.4·1 + 0.6·0 = 0.4.

Hence the chosen argument is a1.
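The algorithm above is straightforward to implement. Below is a minimal Python sketch, run on the fragment of Example, part 7 for audience 1; since ∥a1∥_1 = ∥b1∥_1 = .190 and ∥c2∥_1 = .196 (Figure 3), all three attacks in the fragment are defeats.

```python
def grounded_extension(args, defeats):
    """Steps 1-5 above: iteratively collect undefeated arguments and
    remove the arguments they defeat."""
    args, defeats = set(args), set(defeats)
    extension = set()
    while True:
        undefeated = {a for a in args
                      if not any((b, a) in defeats for b in args)}
        if undefeated <= extension:     # no new unquestionable argument
            return extension
        extension |= undefeated
        removed = {b for b in args
                   if any((a, b) in defeats for a in extension)}
        args -= removed
        defeats = {(x, y) for (x, y) in defeats
                   if x in args and y in args}

# Fragment of Example, part 7, audience 1: a1 and b1 defeat each other,
# and c2 defeats b1 (the impacts in Figure 3 make every attack succeed).
print(grounded_extension({"a1", "b1", "c2"},
                         {("a1", "b1"), ("b1", "a1"), ("c2", "b1")}))
# -> {'a1', 'c2'}: a1 is convincing, as argued above.
```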
In addressing these considerations, future research may explore refinements or extensions of the notion of convincing argument. In particular, one could consider the application of Bayesian reasoning, machine learning techniques, or other methodologies to datasets concerning past Public Interest Communication campaigns, in order to better characterise the notion of convincing argument in different contexts.

[Figure 4: The diagram representing the framework ⟨A, ↠_1⟩ of audience 1 in Example 8. The grounded extension is emphasised by circles.]

[Figure 5: The diagram representing the framework ⟨A, ↠_2⟩ of audience 2 in Example 8. The grounded extension is emphasised by circles.]

5. Possible generalisation: list-of-arguments campaigns

In several cases, campaigns consist of a list of arguments, often presented as a "top ten" or similar format. This approach aims to distil complex ideas or positions into a concise and easy-to-remember set of key points, ensuring clarity and resonance with the audience and making the message more impactful. By strategically selecting these arguments, campaigns can effectively align with the values and concerns of their audience, thereby increasing their persuasive influence.

The order of these arguments can be crucial. In some contexts, the first argument holds significant weight, as it captures initial attention and sets the tone for the entire message. Alternatively, the last argument can leave a lasting impression, often considered the most impactful as it resonates as the final thought with the audience. Understanding how the sequence influences perception and engagement is vital for crafting compelling campaigns that effectively communicate their intended message.

Moreover, audience preferences vary regarding the length of argument lists. Some audiences may prefer a shorter list that highlights key points succinctly, allowing for easy recall and immediate impact. In contrast, others may favour a longer list that provides comprehensive coverage of different aspects of the issue, appealing to those who seek detailed information and thorough analysis.

Under these considerations, if a list ℓ ∈ A* is given (given a set S, we denote by S* the collection of lists of elements of S; in this respect, a real interval is treated as the set of all numbers included between its endpoints), we have to consider several factors, including its length and its order. These factors can be taken care of by functions O_i : A* → [0,1]* that satisfy length(O_i(ℓ)) = length(ℓ). The j-th element of O_i(ℓ) represents the effect of being in position j on the overall impact of an argument.

There are several other properties that we might want to consider, but we do not address them in this preliminary work. These include the consistency and cohesion of the list. Consistency refers to whether the arguments within the list attack each other, potentially leading to internal conflicts that undermine the overall persuasiveness of the list. Cohesion, on the other hand, pertains to the relationships between the arguments in the list: cohesive arguments are those that support and reinforce each other, creating a unified and convincing narrative. Taking into account both the consistency and the cohesion of the arguments can help ensure that the list is both logically sound and effectively persuasive.

For each audience i ≤ k, we extend ∥·∥_i to lists. We propose:

∥·∥_i : A* → [0,1],    ⟨a_1, ..., a_m⟩ ↦ √( ∑_{j=1}^m (O_i(⟨a_1, ..., a_m⟩)_j · ∥a_j∥_i)² ).
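The model leaves the order functions O_i unspecified. As a purely illustrative sketch, the Python snippet below instantiates O_i with a hypothetical recency weighting (later positions count more, and the last position has full weight); both this weighting and the example list are our own assumptions, not part of the model.

```python
from math import sqrt

def recency_weights(arg_list):
    """A hypothetical O_i: weights grow linearly with position,
    so the last argument counts most."""
    m = len(arg_list)
    return [(j + 1) / m for j in range(m)]

def list_impact(arg_list, impacts_i, order_fn=recency_weights):
    """||<a_1,...,a_m>||_i = sqrt( sum_j (O_i(l)_j * ||a_j||_i)^2 )."""
    o = order_fn(arg_list)
    return sqrt(sum((o[j] * impacts_i[a]) ** 2
                    for j, a in enumerate(arg_list)))

# Impacts of a1, a2, a3 for audience 1, taken from Figure 3:
impacts_1 = {"a1": .190, "a2": .236, "a3": .204}
print(round(list_impact(["a3", "a1", "a2"], impacts_1), 3))  # 0.276
```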
Then, as above, a first goal can be:

Goal 1. Find ℓ among lists of elements of Apos of a bounded (or fixed) length such that the following quantity is maximal:

∑_{i=1}^k p_i · ∥ℓ∥_i.

Again, another possible goal is to maximise the number of people convinced:

Goal 2. Find ℓ among lists of elements of Apos of a bounded (or fixed) length such that the following quantity is maximal:

∑_{i=1}^k p_i · χ(con_i(ℓ)).

The question is then how to define a notion of convincing list. An option is to say that a list ℓ convinces i if and only if at least one argument of ℓ convinces i. Accordingly, taking also into account the impact of the order, a possible preliminary definition is:

con_i(⟨b_1, ..., b_m⟩) ⟺ ⋁_{j≤m} ∀a→b_j ∥a∥_i < O_i(⟨b_1, ..., b_m⟩)_j · ∥b_j∥_i.

6. Final remarks

One important question to consider is whether this approach is meaningful for representing individuals who are already convinced. These individuals have already accepted the arguments and adopted the behaviours or policies being promoted. Therefore, it is crucial to evaluate whether the communication under study provides any additional value to this demographic. Specifically, we need to determine whether it helps in reinforcing their beliefs, preventing backsliding, or equipping them to persuade others. Understanding the impact on already convinced individuals can help refine our strategy to maintain and strengthen their commitment.

Another critical aspect to consider is the potential side effects of a campaign. For instance, could the campaign inadvertently alienate or provoke certain groups? Might it create unintended social or psychological impacts? Could the campaign, in extreme cases, have an effect opposite to the desired one? (See an example in [5].) Assessing the possible side effects is essential to ensure that the campaign does not generate negative consequences that outweigh its benefits.

We plan to include a temporal dimension in future phases of this work. In doing so, it becomes crucial to consider the goals of individual steps within the campaign. Over time, the dynamics of persuasion change, and strategies must adapt accordingly. For example, if a certain number of people are already convinced, this can serve as a powerful argument in later stages of the campaign to build momentum and credibility. Highlighting the growing support can encourage others to join, leveraging social proof as a persuasive tool. Therefore, a temporal strategy should include phased goals and tailored messages that evolve with the campaign's progress.

It is also important to explore alternative approaches to our current strategy. One such approach could involve comparing conclusions rather than arguments. Instead of focusing on the individual arguments and their interrelations, we could evaluate the overall conclusions reached by different groups or individuals. This method might provide a clearer understanding of the end results and help identify common ground or major points of divergence. By comparing conclusions, we can potentially streamline the analysis and focus on the most impactful elements of the debate, leading to more effective communication strategies.

Acknowledgments

We acknowledge financial support from the MUR project PRIN 2022 EPICA "Enhancing Public Interest Communication with Argumentation" (CUP D53D23008860006), funded by the European Union - Next Generation EU.

References

[1] Bench-Capon, T.J.M. (2003). Persuasion in Practical Argument Using Value-based Argumentation Frameworks. Journal of Logic and Computation, 13(3):429–448. doi:10.1093/logcom/13.3.429.
[2] Dung, P.M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321–357.
[3] Elwood, Z. (2020). The best, most logical arguments against veganism. Medium. https://apokerplayer.medium.com/the-best-most-logical-anti-vegan-arguments-477ebcc8aee1 (consulted on July 30, 2024).
[4] Haidt, J., Joseph, C. (2004). Intuitive ethics: how innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4):55–66.
[5] Hornik, R., Jacobsohn, L., Orwin, R., Piesse, A., Kalton, G. (2008). Effects of the National Youth Anti-Drug Media Campaign on youths. American Journal of Public Health, 98(12):2229–2236. doi:10.2105/AJPH.2007.125849.
[6] Jacobson, M. (2006). Six Arguments for a Greener Diet: How a More Plant-Based Diet Could Save Your Health and the Environment. Center for Science in the Public Interest, Washington.
[7] Kaci, S., van der Torre, L. (2008). Preference-based argumentation: Arguments supporting multiple values. International Journal of Approximate Reasoning, 48(3):730–751.
[8] Kiesel, J., Alshomary, M., Handke, N., Cai, X., Wachsmuth, H., Stein, B. (2022). Identifying the Human Values behind Arguments. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 4459–4471. doi:10.18653/v1/2022.acl-long.306.
[9] van der Meer, M., Vossen, P., Jonker, C., Murukannaiah, P. (2023). Do Differences in Values Influence Disagreements in Online Discussions? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 15986–16008. doi:10.18653/v1/2023.emnlp-main.992.
[10] Qiu, L., Zhao, Y., Li, J., Lu, P., Peng, B., Gao, J., Zhu, S. (2022). ValueNet: A New Dataset for Human Value Driven Dialogue System. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):11183–11191. doi:10.1609/aaai.v36i10.21368.
[11] Schwartz, S.H., Cieciuch, J., Vecchione, M., Davidov, E., Fischer, R., Beierlein, C., Ramos, A., Verkasalo, M., Lönnqvist, J.-E., Demirutku, K., et al. (2012). Refining the theory of basic individual values. Journal of Personality and Social Psychology, 103(4):663–688. doi:10.1037/a0029393.