=Paper=
{{Paper
|id=Vol-3915/paper-2
|storemode=property
|title=Plain Statistical Terms to Avoid Prejudicial Rejection of Machine Learning in Territorial Data Analysis (Short paper)
|pdfUrl=https://ceur-ws.org/Vol-3915/Paper-2.pdf
|volume=Vol-3915
|authors=Ermanno Zuccarini
|dblpUrl=https://dblp.org/rec/conf/aiia/Zuccarini24
}}
==Plain Statistical Terms to Avoid Prejudicial Rejection of Machine Learning in Territorial Data Analysis (Short paper)==
Ermanno Zuccarini¹

¹ Department of Engineering "Enzo Ferrari" (DIEF), University of Modena and Reggio Emilia (UniMoRe), via P. Vivarelli 10, 41125 Modena, Italy
Abstract
This position paper advocates the use of statistical language to popularize the analysis of territorial
data conducted with machine learning (ML) in smart city projects. The jargon of ML and AI, two terms
often confused in popular culture, includes psychological terms that can easily dazzle and mislead
laypeople, provoking alarmist reactions and outright rejection. Recent studies have analyzed people's
perceptions of risk, their tendencies to anthropomorphize, and the presence of apocalyptic narratives
about AI. This paper therefore recommends a return to logical concepts learned in school, delving into
mathematics and specifically statistics when necessary, to popularize the branches of AI soberly. This
approach would keep AI itself as a philosophical question. Moreover, the narrative of an "AI revolution"
should not be emphasized in territorial projects, which should be inclusive; an evolutionary perspective
is preferable. To illustrate this point, the paper sketches a public workshop in which people with
different generational, social, and work backgrounds focus on the ML risks related to the study of
territorial data in a smart city. Far from sensationalist conditioning, participants glimpse and discuss
the not entirely new nature of these risks, paving the way to progressive learning.
Keywords
AI popularization, AI rejection, Smart cities, Statistics, Machine learning
1. Introduction
A return to the statistical foundations of machine learning (ML), when applied to territorial
analysis projects, should be the criterion for clear, self-explanatory public communication. The
psychologized terminology used among AI experts could in fact backfire, leading to rejection
by the public, opposing political parties, and even members of the working group. The author
of this article perceives this problem as urgent, while currently working with long short-term
memory neural networks on urban heat island data analysis for the town of Carpi. This is the
first module of the Carpi Smart City project[1]. The urban agglomeration is located in Italy in
the central Po plain. Urban and climate data analysis excludes the investigation of individuals
and the automation of decisions. However, any reference to AI could be interpreted as a sign of
more intrusive or uncontrollable initiatives. A challenging context is provided by the recent
and ongoing media coverage of AI. This coverage tends to popularize tools like GPTs and more
generally ML, with an anthropomorphic perception, identifying all these technologies with "the
AI." The subdivision of AI into its branches is ignored. This whole narrative fosters in the public
a generic fear of uncontrollability and takeover by self-evolving intelligent software, implicitly
evoking scenarios contrary to transparency and democratic participation. For these reasons, there
is a need for public communication that references commonly understood school concepts and does
not confuse laypeople with AI jargon. The specialist terminology of ML should not be discarded,
but rather introduced gradually, after straightforward mathematical notions.

AIxIA 2024 Discussion Papers, 23rd International Conference of the Italian Association for Artificial Intelligence, Bolzano, Italy, November 25–28, 2024.
ermanno.zuccarini@unimore.it (E. Zuccarini); https://it.linkedin.com/in/ermannozuccarini (E. Zuccarini); ORCID 0000-0003-1611-2448 (E. Zuccarini).
© 2025 Copyright for this paper by its author. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.
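The statistical framing advocated here can be made concrete. As a minimal sketch, one-step-ahead temperature prediction of the kind underlying an urban-heat-island study can first be presented as school statistics: an ordinary least-squares fit of tomorrow's temperature on today's. The temperature series below is invented purely for illustration; no Carpi project data or models are shown, and the full project uses long short-term memory networks rather than this toy regression.

```python
# Illustrative only: an invented daily-temperature series stands in for
# real urban-heat-island measurements (no project data is used).
temps = [21.0, 22.5, 24.0, 23.5, 25.0, 26.5, 26.0, 27.5, 28.0, 27.0]

# One-step-ahead prediction framed as school statistics: fit
# x[t+1] = a * x[t] + b by ordinary least squares (simple linear regression).
xs = temps[:-1]          # predictor: today's temperature
ys = temps[1:]           # response: tomorrow's temperature
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

forecast = a * temps[-1] + b   # tomorrow's predicted temperature
```

A layperson who has seen a regression line in school can follow every step here; an LSTM can then be introduced as a more flexible estimator of the same kind of conditional prediction, rather than as an opaque "intelligence".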
2. People’s AI: perceived risks, anthropomorphism and
apocalypse
Recent social research helps delineate widespread opinions and sentiments about AI. In studies of
public sector decision-making conducted in the UK, Haesevoets et al. [2] found that people prefer
AI to have some, but significantly less, weight than politicians, citizens, and human experts; this
holds particularly for decisional roles and ideologically charged decisions. Bao et al. [3] segmented
the US population by perceived risks and benefits of AI in general; the negative class (33.3%) was
the largest. Kieslich et al. [4] analyzed the different weights given to the main ethical aspects of
AI: explainability, fairness, security, accountability, accuracy, privacy, and machine autonomy. Five
groups emerged: ethically concerned (31.4%), indifferent (24.3%), safety concerned (15.2%), fairness
concerned (15.1%), and endorsing human control (14.0%). Dydrov et al. [5] surveyed key terms with
Google Trends and Yandex. The range of markers people apply to AI includes: "dehumanizing",
"godless", "soulless", "dangerous", "dead", "rational", "will help", "create", "destroy", "enslave",
"kill". This anthropomorphization reveals a metaphysics in people's discourses, in which the person
perceives himself not as equal but as inferior to the algorithm. For Mascareño [6], climate change
and artificial intelligence have inspired numerous apocalyptic visions of the future. Both scenarios
predict a world-ending outcome unless immediate action is taken. For the general public, apocalyptic
eschatology offers an attractive alternative, blending oversimplified explanations, calls to action,
justification for resource allocation, and even specific outcomes.
3. Position expressed
3.1. The need to understand in a familiar way
What is a smart city project? It could be defined as "a place where traditional networks and
services are made more efficient with the use of digital solutions for the benefit of its inhabitants
and business" [7]. Regarding ML, it is easy to highlight its statistical nature in projects like that
of Carpi, where typical classification or regression operations are performed on territorial data
using neural networks for predictive purposes. Communication and education initiatives need
to promote this understanding by properly referencing examples from the scientific literature [8].
Basic concepts of symbolic logic and mathematics can enable high school students to practice
small implementations of expert systems and neural networks. This approach cannot be expected
of a population that has not recently engaged in school studies. Presenting the mathematical
concepts of ML in popularization and basic courses, with simple application examples, even
graphically animated, is nevertheless the right way to dispel the myth of a super-intelligence
and provide a realistic understanding of the subject. One objection could be raised against this
approach: the risk of debasing the identity language of the AI community. Under this proposal,
however, that jargon is not discarded; its common use is merely made subordinate to clarifying
introductions.
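The statistical nature noted above can be shown directly. As an illustrative sketch (the weights and inputs are arbitrary values, not taken from any trained model), a single "neuron" with a sigmoid activation computes exactly the logistic-regression formula of a standard statistics course:

```python
import math

def neuron(inputs, weights, bias):
    """One 'neuron': weighted sum of inputs, then a sigmoid squashing."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # the logistic (sigmoid) function

def logistic_regression(inputs, coefficients, intercept):
    """Textbook logistic regression: the identical formula, in statistical vocabulary."""
    z = sum(c * x for c, x in zip(coefficients, inputs)) + intercept
    return 1.0 / (1.0 + math.exp(-z))

# Arbitrary illustrative parameters: both views yield the same probability.
x = [1.5, -0.5]
w, b = [0.8, -1.2], 0.3
same = neuron(x, w, b) == logistic_regression(x, w, b)   # True by construction
```

Only the vocabulary differs: "weights and bias" versus "coefficients and intercept". A multi-layer network then appears as a composition of such regressions, not as a categorically new kind of entity.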
3.2. Feet on the logic-mathematical grounds and AI as a philosophic question
What if, in popularization, AI becomes "real" through concrete software applications, or worse,
the term AI is used as a kind of commercial label? If AI does not remain a philosophical question
on the horizon, the applications so labeled can prove disillusioning once their logical-mathematical
structure is gradually understood, and clarifications about "narrow" AI may then sound like an
after-the-fact adjustment. However, if a bold popularization effort is made to present these
applications as a continuation of their classic logical backgrounds, their advancements compared
to previous technologies could be better perceived and appreciated as genuine progress. The
commonly used jargon of AI remains, but once it is understood as just a jargon, it ceases to be a
source of misunderstanding. Furthermore, a significant portion of the population has a cultural
education that tends to go beyond mere pragmatism. This leads to dissatisfaction with the
perceived equivalence between human and artificial intelligence based on external appearances,
such as the Turing test. The focus shifts instead to the different generating processes rooted in
either biological or electronic nature. The reflection then continues philosophically: does the
human mind, which invents AI, transcend it? If so, in what way?
3.3. The narrative of evolution is more inclusive than that of revolution
The rhetoric of revolution characterizes the narrative of modern and contemporary changes
in many areas: politics, science, industrialization, social habits, and, of course, technology.
It sharply separates a before and after, imposing a new language to describe the innovative
scenario. People are divided between those who embrace the revolution, along with the attendant
effort of personal change, and those who remain behind. Similarly, AI language marks a clear
break from the past. However, its rhetoric can be a discouraging factor for people who need to
understand its basic paradigms starting from their educational and common sense background.
Hence the importance of a historical, evolutionary perspective that starts from traditional
disciplines. In conclusion, the feeling of displacement caused by the acceleration of progress
must be avoided in public communication and education if a public project aims to be truly
inclusive toward the population. Moreover, there is a subtle apocalyptic pattern in thinking
about the accelerating future that must be critically analyzed in educational work.
3.4. An idea of public collaborative workshop: giving an interdisciplinary and
historical depth to AI risks in Smart Cities
Moving on from the general views above, a collaborative public workshop is outlined here. It could
be part of educational activities related to territorial data analysis with ML in a smart city project.
This workshop is based on the research ”Artificial Intelligence and Urban Development”[9],
which was commissioned by the European Parliament’s Committee on Regional Development
and published in 2021. The research provides a nonexhaustive list of risks associated with the
deployment of AI in smart cities. Although some of these risks are more relevant to ML in data
analysis, cross-referencing with the remaining risks may prove fruitful.
1. Performance risk: errors, bias, opacity (i.e., "black-box"), poor explainability, performance
instability;
2. Security risk: cyber-intrusion, open-source software, privacy;
3. Control risk: AI going "rogue", inability to control malevolent AI;
4. Ethical risk: "lack-of-values", value-alignment, goal-alignment;
5. Economic risk: job-displacement, "winner-takes-all", liability, reputational risk;
6. Societal risk: autonomous weapons proliferation, "intelligence divide".
The EU document does not differentiate between the various branches of AI or between AI and
traditional computer science, and many of the listed risks are not unique to computer science,
although its power amplifies them. A popularizer might easily present these as completely new
threats posed by AI, whether out of a limited perspective or for sensationalist purposes. The main objectives
of the workshop are:
1. Historical continuity vs. novel risks: identify which elements of the listed risks have
historical precedents and which are entirely new.
2. Interdisciplinary analysis: use the diverse generational, social, and professional back-
grounds of the participants to enrich the discussion and multiply perspectives.
3. Expert guidance: employ a blended top-down and bottom-up approach, guided by experts,
to facilitate a comprehensive analysis.
Take, for instance, a possible development of point 1 related to performance risk: errors, bias,
opacity (i.e., "black-box"), poor explainability, and performance instability. The top-down part
begins with an experienced statistician providing traditional examples of tools and associated
risks. The discussion then transitions to ML, with references to statistical learning. Finally, new
risks are identified and illustrated with examples. In the bottom-up phase, each participant
shares how the presented concepts could relate to their own experiences.
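To make the top-down phase tangible, the statistician's opening could include a toy demonstration of a performance risk that long predates ML: overfitting. The sketch below uses invented data points (hypothetical values chosen only for illustration) and contrasts a least-squares line with a model that merely memorizes its training data; the memorizer has zero training error yet fails on a new observation.

```python
# Invented training points with a roughly linear trend (illustration only).
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]
held_out = (5, 9.8)   # a point neither model sees during fitting

# Traditional statistics: ordinary least-squares line y = a * x + b.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in train)
     / sum((x - mean_x) ** 2 for x, _ in train))
b = mean_y - a * mean_x

def line_model(x):
    return a * x + b

# A caricature of overfitting: memorize the training set exactly and
# fall back to the last observed value for anything unseen.
lookup = dict(train)

def memorizer(x):
    return lookup.get(x, train[-1][1])

x_new, y_new = held_out
err_line = abs(line_model(x_new) - y_new)   # small: the trend generalizes
err_memo = abs(memorizer(x_new) - y_new)    # large: memorization does not
```

From this familiar starting point, the discussion can move to how the same trade-off reappears, amplified, in neural networks, before the bottom-up phase collects the participants' own experiences.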
4. Conclusion
AI is often anthropomorphized and mythologized by the general public. Therefore, supplemen-
tary work in communication and education is required, incorporating notions from commonly
known mathematics with in-depth excursions into statistics. This approach could diminish the
dazzling effect of AI’s psychologized jargon and allow people to better focus on the not entirely
new risks of ML with a renewed historical perspective. In the later phases of the Carpi Smart
City project, the approach outlined above will be tested.
References
[1] Smart City. Carpi città pilota. Ma il 5g non c’entra, 2020. URL: https://ruggeropo.it/
smart-city-carpi-citta-pilota-ma-il-5g-non-centra/.
[2] T. Haesevoets, B. Verschuere, R. Van Severen, A. Roets, How do citizens perceive the use of
artificial intelligence in public sector decisions?, Government Information Quarterly 41
(2024). doi:10.1016/j.giq.2023.101906.
[3] L. Bao, N. M. Krause, M. N. Calice, D. A. Scheufele, C. D. Wirz, D. Brossard, T. P. Newman,
M. A. Xenos, Whose AI? How different publics think about AI and its social impacts,
Computers in Human Behavior 130 (2022). doi:10.1016/j.chb.2022.107182.
[4] K. Kieslich, B. Keller, C. Starke, Artificial intelligence ethics by design: Evaluating public
perception on the importance of ethical design principles of artificial intelligence, Big Data
&amp; Society 9 (2022). doi:10.1177/20539517221092956.
[5] A. A. Dydrov, S. V. Tikhonova, I. V. Baturina, Artificial intelligence: Metaphysics of philistine
discourses, Galactica Media: Journal of Media Studies 5 (2023) 162–178.
doi:10.46539/gmd.v5i1.302.
[6] A. Mascareño, Contemporary visions of the next apocalypse: Climate change and artificial
intelligence, European Journal of Social Theory 27 (2024) 352–371.
doi:10.1177/13684310241234448.
[7] European Commission, Smart cities, n.d. URL: https://commission.europa.eu/eu-regional-and-urban-development/
topics/cities-and-urban-development/city-initiatives/smart-cities_en.
[8] B. Warner, M. Misra, Understanding neural networks as statistical tools, The American
Statistician 50 (1996). doi:10.1080/00031305.1996.10473554.
[9] European Parliament, Policy Department for Structural and Cohesion Policies, Directorate-General
for Internal Policies, Artificial intelligence and urban development, 2021. URL:
https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690882/IPOL_STU(2021)690882_EN.pdf.