Enhancing learning through user awareness: the monitoring of searches as a resource-based strategy

Giulia Caso1,2, Angelo Corallo1,3 and Marco Biagini4

1 University of Salento, Lecce, 73100, Italy
2 CRLab - Cybersecurity Research Lab, University of Salento, Lecce, 73100, Italy
3 Technical Committee Chairperson of the European Committee for Standardization 353 (CEN/TC 353) - Standardisation of Learning Technologies (EdTech)
4 NATO Modelling & Simulation Center of Excellence (M&S COE), Italy

Abstract
The advent of the digital age has made content consumption affordable for a broader population. Proper information management therefore plays a fundamental role, as data and information processing drive strategic knowledge on a wider scale. The aim of this work is to study decision-making dynamics during search activities of students in academia, private individuals, and employees in organizations, and to propose a user-oriented solution that supports the behavioural analysis of information searches on the Internet, increasing awareness of limited cognitive and processing resources through the design of a dashboard. Potential benefits of this approach for misinformation mitigation are discussed.

Keywords
Misinformation, Attention resource, User awareness, Learning process

1. Introduction
The rapid propagation of misinformation was identified by the World Economic Forum as a primary threat and one of the ten largest trends in society, while also emphasising the difficulty of mitigating the phenomenon [1]. Although misinformation is not a new concept, concerns have grown considerably with the advent of the Internet and social media: their effect on the speed and reach of information dynamics (regardless of its accuracy or truthfulness) confronts the "information society" with new critical challenges.
This motivates further investigation, aiming at a better and harmonised understanding of the emergence of misinformation within search and information retrieval, and at reducing its impact [2]. This paper adopts a user-centered perspective, focusing on the limited cognitive and processing resources available to (a group of) individuals, and designs a proposal that can support the mitigation of misinformation phenomena based on proactive user involvement. We focus on measurable quantities that can inform us about the effort and resources invested during the information selection process, with the active involvement and awareness of users regarding the attention resource as a distinguishing feature. The objective is to support the management of misinformation phenomena without relying on the multiple definitions of the term arising from the analysis of the state of the art. Indeed, these definitions would require multiple specific tools to address the different facets of misinformation, and such tools should also take into account the cognitive effort required for fact-checking and misinformation detection. In combination with selection biases stimulated by current monetisation methods (click-baiting), which may strengthen prejudices and misconceptions, such factors can be properly controlled by taking the user's behaviour as the focal point of the monitoring activities.

IS-EUD 2023: 9th International Symposium on End-User Development, 6-8 June 2023, Cagliari, Italy
Contact: giulia.caso@unisalento.it (G. Caso); angelo.corallo@unisalento.it (A. Corallo)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
The aforementioned objective has multiple advantages in the field of cyber-education. First, the monitoring and supervision of actions and behaviours during the exploration of digital information allow the user-student to trace the objectives achieved and compare them with the expected results, identifying gaps and actions to be undertaken to enhance or accelerate the learning process, paving the way for a guided improvement of knowledge acquisition from web resources. Second, proactively strengthening information browsing behaviours can mitigate potential weaknesses due to the human factor, which is a critical issue in cybersecurity. Indeed, the proposed approach keeps the focus on achieving the research objectives in an efficient and systematic way, while also developing an understanding of possible improvement strategies and an awareness of the cognitive biases that one may involuntarily incur. The approach can therefore support cybersecurity education across many sectors, from individuals to enterprises, while also offering benefits for academia; it is thus a general-purpose solution with multiple possible end users (organisations, private individuals, students, etc.). In this document, unless otherwise specified, the generic term "user" refers to all of these categories.

1.1. Structure of the Paper
The paper is organised as follows. Section 2 examines the state of the art based on the results of a systematic literature review, also showing how concepts present in the literature provide an interesting basis for our proposal, exploiting Simon's notions of bounded rationality and the economics of attention. As a starting point for the implementation of the dashboard, we focus in particular on the attention resource, together with the enabling technologies for its measurement, which are the subject of Section 3; this also provides an opportunity to formulate hypotheses on the way the user interacts with the dashboard.
We conclude by discussing final considerations and future work in Section 4.

2. State of the Art and Theoretical Background
Using a systematic literature review approach [3], we analysed relevant sources from the scientific literature around the following research questions: (a) understanding the meaning(s) of misinformation; (b) background motivation and relevance of this topic for the present discussion; (c) characterization of the phenomenon; (d) evaluation of existing tools for the study of decision-making related to information sources. This provided theoretical evidence for the different aspects underlying the proposed idea: the current importance of the topic under consideration; the multiplicity of taxonomies related to misinformation, which leads to a non-unique reference and to approaches that are often unsuitable for handling the phenomenon in general; the innovation of the user-based methodology provided by an approach such as the one proposed; and the existing technological tools and their possible correlation with the proposal developed. Through active monitoring of the information search process, we aim to make users an active and conscious part of it by identifying specific resources in the dashboard, such as search time, search cost and, in particular, attention-related indicators that are informative about the efficiency of the learning process. Specifically, the attention construct can be evaluated based on user behaviour in web browsing and navigation using already available tools (keyboard/mouse tracking, eye-tracking). The review was carried out on both indexed databases (Scopus, https://www.scopus.com/, and ScienceDirect, https://www.sciencedirect.com/) and open repositories (ResearchGate, https://www.researchgate.net/) according to specific criteria designed to provide a pool of documents offering adequate and comprehensive answers to the above-mentioned research questions.
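As a purely illustrative sketch of how such behavioural signals might be quantified (the log format, event names and derived indicators below are our own assumptions, not taken from the cited tools), per-page dwell time and interaction density can be derived from a chronological log of browsing events:

```python
def interaction_indicators(events, session_end):
    """Compute per-page indicators from a chronological log of
    (timestamp_s, page, kind) triples: kind is 'visit' when a page is
    opened, any other value (e.g. 'keypress', 'mousemove') counts as an
    interaction on the current page. Returns per-page dwell time in
    seconds and interaction density (events per second of dwell)."""
    dwell, actions = {}, {}
    current, t_enter = None, None
    for t, page, kind in events:
        if kind == "visit":
            if current is not None:  # close the dwell on the previous page
                dwell[current] = dwell.get(current, 0.0) + (t - t_enter)
            current, t_enter = page, t
        elif current is not None:    # interaction event on the open page
            actions[current] = actions.get(current, 0) + 1
    if current is not None:          # close the dwell on the last page
        dwell[current] = dwell.get(current, 0.0) + (session_end - t_enter)
    density = {p: actions.get(p, 0) / dwell[p] for p in dwell if dwell[p] > 0}
    return dwell, density
```

For example, a session that opens page A at t=0, types once, moves to page B at t=10 and ends at t=40 yields 10 s of dwell on A and 30 s on B, with one interaction each.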
The selection process resulted in the collection of 24 papers ([4, 5, 6, 1, 7, 8, 9, 10, 11, 12, 13, 14, 2, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]). A complete treatment of the review process is beyond the scope of this work; however, it is worth emphasizing that, within the analysed documents, the most debated field concerns the relationship between individuals, cognition and the truthfulness of news, demonstrating that disinformation is often a phenomenon also driven by psychology; we will focus on this area. A premise about the terms disinformation and misinformation is in order, as they will be used extensively in this paper: we distinguish misinformation as accidental falsehood and disinformation as deliberate falsehood [26]. For the purposes of this proposal, the information acquired and shared by users can constitute both misinformation and disinformation, as it can be learnt and shared both unintentionally and deliberately; the two terms will therefore be used as synonyms in the following. The review highlighted that disinformation detection tools tend to make the user passive: solutions based solely on ground truth or external evidence do not allow the user to take an informed and active approach. This supports the hypothesis that the effort (e.g., human cognitive resources) required to support users in fact-checking should also be considered in the disinformation context.

2.1. Theoretical background
In order to propose a possible solution, we use a number of notions found in the literature, among which the most relevant are H. Simon's concepts of bounded rationality, the economy of attention, and satisficing (a combination of the words satisfy and suffice) [27, 28, 29].
Specifically, the first and third concepts are closely linked: they refer to the limited cognitive resources available to users which, due to internal and external limitations related to the collection and processing of information, allow them to reach satisfactory (even if not optimal) solutions, often through the use of 'mental shortcuts' called heuristics. The second concept, instead, refers to the problems related to the abundance of information in modern society and to the ability to process it, elements that may lead to user misinformation. It should be noted that the notion of bounded rationality has been widely explored in the decision science literature, but the notion of bounded resources can also be explored in different statistical frameworks, e.g., in the investigation of inequivalent descriptions of statistical systems [30].

3. Model Proposal
We hypothesize that analysing users' resource utilization and search performance can mitigate information overload by improving the quality of information management, increasing trust in the information sphere, and counteracting the misuse of heuristics. In order to limit the computational and memory constraints on information processing by users, the transfer of information (acquired and shared) by users defines the boundaries of the search space: these boundaries depend on the information acquisition process and on the user, in relation to the resources used during the process, so that further searches (queries and information acquisition) may be deemed unnecessary or, conversely, necessary. In the latter case, the level of satisfaction determines whether such additional searches are carried out.
The context of navigating information sources, if not adequately monitored, can foster the consolidation of cognitive biases (mainly selection bias), i.e., constructs based not only on critical judgment but also on heuristic strategies adopted to avoid the effort of knowledge revision or additional information processing (see also [31]). The proposed model thus starts from a representational process applied to the management of decision-making during research: once the resources to be invested are initialized at the beginning of the search, chosen according to the intended reference values (such as the willingness to explore fee-based sources), it is possible to monitor and update them during the process, giving the user the opportunity to receive intermediate and final feedback on their use and on the performance of the search conducted (in terms of sources examined, time spent exploring them, type of items displayed, etc.). The designed dashboard is referred to as the Search Performance Dashboard (SPD). The focus is on increasing users' "awareness", which is central to the consumption and spread of disinformation, with the aim of providing an active approach with iterations, feedback and continuous monitoring within the information search process, in order to mitigate inaccuracies due to human factors (e.g., inattention, bias) and to reduce the spread of disinformation related to the acquisition and dissemination of inaccurate or unreliable information. The analysis of the resources invested, coupled with the overall view of the search results in the SPD, provides the elements needed to capture the need to further integrate the searches and to modify or update them dynamically, including the resolution of potential inconsistencies.
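To make the representational process more concrete, the following minimal sketch shows how resource initialization, monitoring and intermediate feedback could be handled in an SPD implementation; the resource names, budget values and the 80% warning threshold are hypothetical choices of ours, introduced only for illustration:

```python
class SearchBudget:
    """Tracks resources initialized at the start of a search session and
    flags consumption approaching or exceeding the initialized budget."""

    def __init__(self, budgets, warn_ratio=0.8):
        self.budgets = dict(budgets)          # e.g. {"time_s": 1800, "sources": 15}
        self.spent = {k: 0 for k in budgets}  # consumption starts at zero
        self.warn_ratio = warn_ratio          # warn at 80% of a budget by default

    def consume(self, resource, amount):
        """Record consumption and return the updated status flag."""
        self.spent[resource] += amount
        return self.status(resource)

    def status(self, resource):
        ratio = self.spent[resource] / self.budgets[resource]
        if ratio >= 1.0:
            return "exhausted"
        if ratio >= self.warn_ratio:
            return "warning"
        return "ok"

    def snapshot(self):
        """Intermediate feedback for the dashboard: spent, budget, status."""
        return {r: (self.spent[r], self.budgets[r], self.status(r))
                for r in self.budgets}
```

A session could then report, for instance, that 13 of 15 planned sources have been examined and emit a "warning" flag, prompting the user either to re-initialize the budget or to conclude the search.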
Further benefits arise for both private users and organizations/academia in counteracting human-factor problems in cybersecurity [32]: it is possible to study positive correlations between a decrease in attention and, consequently, an increase in risks from untrusted web interactions (see, for example, [33] for a new class of such attacks). Identifiable users are, in fact, private users, referred to as basic users, and organisations/academia, referred to as advanced users, who can use the dashboard in free mode, obtaining only the data visualization elements needed to examine the search results in full, or in paid mode, to also monitor the state of attention. Providing management and monitoring tools to support individuals can prevent the consolidation of biases that can undermine business for organizations, while also facilitating statistical analysis of groups of individuals (such as software development teams [34] or academic programmes [35]). We emphasize that the idea presented in this work is an experimental approach, whose main innovation lies in its user-oriented character. A practical case study is therefore not yet available, but the proposal can be taken up in future works, enabling multiple possibilities of use and implementation: as an example, the SPD could be used in an academic environment, within a class or an educational course, to monitor the active learning of students during a search or study task, evaluating the results obtained with and without the use of the dashboard and proposing surveys that could confirm its benefits.

3.1. Attention resource
We focus on the attention resource as an element of particular interest: it represents a latent construct whose manifest variables and related measurement tools are the subject of current research.
The technological feasibility of the proposed solution with respect to the attention components is a critical point in the implementation of a solution aimed at quantifying the attention resource and feeding it into the proposed dashboard. Focusing on the implementation of this resource, the technologies for studying the degree of attention fall mainly into two groups: (i) monitoring of online behaviour and actions (browsing history, pointer/keyboard use); (ii) sensors measuring physiological and behavioural parameters of the human body. We focus on eye tracking (ET), a technology that correlates with the assessment of attention and can be deployed with a user-friendly approach [36]. It is an appropriate tool for obtaining feedback on users' attention status [37], since visual allocation underlies the processes that guide the selection of information. According to the premotor theory of attention, there is a high correlation between the mechanism governing spatial attention and the one responsible for programming eye saccades [38]; information about where and when gaze is directed can thus be obtained through tools such as heat maps [39]. We point out that recent studies have begun to address the use of eye-tracking for the detection of fake news [40]: this supports the use of eye-tracking technologies (webcams and image-processing software) in relation to a topic of interest for the present work. ET thus represents a valuable technological means of providing the analysis of the attention resource in the SPD. Figure 1a shows a schematic representation of the SPD, while Figure 1b presents a proposed visualization design for the final dashboard; the latter is only an exemplary mock-up, whose actual implementation (also based on the initialized resources) is left as future work.
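As an example of how raw gaze data could feed an attention indicator, fixations can be extracted from gaze samples with a dispersion-threshold (I-DT) algorithm, a standard technique in eye-tracking analysis; the sketch below is simplified, and the threshold values (30 px maximum dispersion, 100 ms minimum duration) are illustrative assumptions rather than recommendations:

```python
def idt_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection on gaze samples
    given as (timestamp_s, x, y) triples in chronological order.
    Returns a list of (t_start, t_end, centroid_x, centroid_y) tuples."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the gaze stays within max_dispersion,
        # measured as (x range + y range) over the candidate window.
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        # Keep the window as a fixation only if it lasts long enough.
        if samples[j][0] - samples[i][0] >= min_duration and j > i:
            xs = [p[1] for p in samples[i:j + 1]]
            ys = [p[2] for p in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```

Fixation counts, durations and centroids obtained this way are the raw material for the heat maps mentioned above and for per-source attention indicators in the SPD.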
The process in Figure 1a describes how, in the initial phase, the user initializes the resources to invest in the search; in the intermediate phase, these are analysed and monitored (using techniques such as eye-tracking for attention), giving the user the possibility to obtain intermediate dashboards, to set up the resources again, and to be warned when their consumption exceeds what was initialized. In the final phase, once the search is complete, the SPD allows the user to decide whether to end the search or resume it to make corrections or additions. The aforementioned resources (and the resulting dashboard) can be implemented on the basis of a preliminary evaluation of the elements to be used, depending on the users' purposes and on other elements that go beyond the scope of this document; in future developments this is certainly an element to consider and develop.

4. Conclusions and future works
The dashboard can be a tool to mitigate information overload and a valuable decision support, providing indicators of the resources spent during a search process: this continuous monitoring can be used to highlight anomalies and atypical behaviours within search processes, raising the awareness of individual users and of organizations/academia (through statistical analyses) and thus favouring the mitigation of the misinformation phenomenon. We also focused on monitoring attention resources because of their relevance in educational processes in the digital age, being crucial for information retrieval and for avoiding phenomena such as mind wandering, i.e., the tendency to drift mentally away from the search task, which can cause errors and prevent users from finding the information they want.
It is therefore worth emphasizing the importance of monitoring students' attention while they gather information: identifying the moments when they struggle to concentrate can help develop awareness, tools or models to improve the search process and the security of the resulting solutions. This provides a basis for designing measurement models for abstract constructs such as attention resources and cognitive biases; these models need validated proxies to extract the relevant knowledge. Along with adequate data, appropriate methods are needed for their analysis: in future research, we envision the adoption of multivariate statistical techniques such as Structural Equation Modeling (SEM) [41], as well as entropy-based approaches that have proven useful in the analysis of perception and expectations [42] and in the graphical and geometric characterization of information quantities [43, 44]. One aspect worth exploring for the technological implementation of the dashboard is the application of tools for network visualization and cluster or community identification, to facilitate the interpretation of data during the search process. Just like the proposed SPD, these graphical representations aim to mitigate the "soft cyber-risk" of information products, in parallel with existing visual tools that mitigate privacy and security issues in software products [45]. The application of this proposal in the educational sphere can promote the awareness of user-students and the monitoring of their actions during the learning process, also supporting the design of cybersecurity-related training programmes for academic users [35]. In this regard, the proposed SPD design can be considered as a basis for both a conceptual framework and a prototype implementation, which can take advantage of existing hardware for image acquisition and software for signal processing, while providing supplementary functionalities for existing digital learning systems.
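As a minimal example of an entropy-based indicator (our own illustrative choice, much simpler than the generalized cross-entropy methods cited above), the Shannon entropy of the distribution of search time across sources can summarize how concentrated the exploration was: a value near zero signals that almost all time went to a single source (a possible symptom of selection bias), while the maximum log2(k) over k sources signals uniformly dispersed exploration.

```python
import math

def source_entropy(time_per_source):
    """Shannon entropy (in bits) of the share of search time spent per
    source, given a mapping {source: seconds}. Returns 0 when all time
    is spent on one source and log2(k) for a uniform spread over k."""
    total = sum(time_per_source.values())
    shares = [t / total for t in time_per_source.values() if t > 0]
    return -sum(p * math.log2(p) for p in shares)
```

Tracked over a session, this single number could be shown on the SPD alongside the attention indicators as a compact dispersion signal.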
Figure 1: Proposed design for the SPD. (a) Process diagram of the resource initialization process, with focus on the attention resource; (b) example of the final SPD visualization mock-up.

References
[1] E. Kapantai, A. Christopoulou, C. Berberidis, V. Peristeras, A systematic literature review on disinformation: Toward a unified taxonomical framework, New Media & Society 23 (2021) 1301–1326.
[2] C. Batailler, S. M. Brannon, P. E. Teas, B. Gawronski, A signal detection approach to understanding the identification of fake news, Perspectives on Psychological Science 17 (2022) 78–98.
[3] D. Tranfield, D. Denyer, P. Smart, Towards a methodology for developing evidence-informed management knowledge by means of systematic review, British Journal of Management 14 (2003) 207–222.
[4] X. Zhou, R. Zafarani, A survey of fake news: Fundamental theories, detection methods, and opportunities, ACM Computing Surveys (CSUR) 53 (2020) 1–40.
[5] B. Guo, Y. Ding, Y. Sun, S. Ma, K. Li, Z. Yu, The mass, fake news, and cognition security, Frontiers of Computer Science 15 (2021) 1–13.
[6] M. D. Molina, S. S. Sundar, T. Le, D. Lee, "Fake news" is not simply false information: A concept explication and taxonomy of online content, American Behavioral Scientist 65 (2021) 180–212.
[7] S. S. Delfino, J. A. S. Pinho Neto, M. R. F. Sousa, Challenges of the information society in recovering and using information in digital environments, RDBCI: Revista Digital de Biblioteconomia e Ciência da Informação 17 (2019) e019036.
[8] P. L. Moravec, A. Kim, A. R. Dennis, Appealing to sense and sensibility: System 1 and System 2 interventions for fake news on social media, Information Systems Research 31 (2020) 987–1006.
[9] F. Spezzano, D. Winiecki, How do people decide political news credibility?, in: 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), IEEE, 2020, pp. 602–605.
[10] R. Wang, Y. He, J. Xu, H. Zhang, Fake news or bad news? Toward an emotion-driven cognitive dissonance model of misinformation diffusion, Asian Journal of Communication 30 (2020) 317–342.
[11] C. Chang, Fake news: Audience perceptions and concerted coping strategies, Digital Journalism 9 (2021) 636–659.
[12] B. Gawronski, Partisan bias in the identification of fake news, Trends in Cognitive Sciences 25 (2021).
[13] R. B. Michael, M. Sanson, Source information affects interpretations of the news across multiple age groups in the United States, Societies 11 (2021) 119.
[14] G. Pennycook, J. Binnendyk, C. Newton, D. G. Rand, A practical guide to doing behavioral research on fake news and misinformation, Collabra: Psychology 7 (2021) 25293.
[15] S. M. Bowes, A. Tasimi, Clarifying the relations between intellectual humility and pseudoscience beliefs, conspiratorial ideation, and susceptibility to fake news, Journal of Research in Personality 98 (2022) 104220.
[16] U. K. Ecker, S. Lewandowsky, J. Cook, P. Schmid, L. K. Fazio, N. Brashier, P. Kendeou, E. K. Vraga, M. A. Amazeen, The psychological drivers of misinformation belief and its resistance to correction, Nature Reviews Psychology 1 (2022) 13–29.
[17] G. V. Johar, Untangling the web of misinformation and false beliefs, Journal of Consumer Psychology 32 (2022) 374–383.
[18] R. C. Moore, J. T. Hancock, A digital media literacy intervention for older adults improves resilience to fake news, Scientific Reports 12 (2022) 6008.
[19] M. L. Stanley, P. S. Whitehead, E. J. Marsh, The cognitive processes underlying false beliefs, Journal of Consumer Psychology 32 (2022) 359–369.
[20] M. Alassad, M. N. Hussain, N. Agarwal, Comprehensive decomposition optimization method for locating key sets of commenters spreading conspiracy theory in complex social networks, Central European Journal of Operations Research (2022) 1–28.
[21] H. Chi, B. Liao, A quantitative argumentation-based automated explainable decision system for fake news detection on social media, Knowledge-Based Systems 242 (2022) 108378.
[22] M. Luo, J. T. Hancock, D. M. Markowitz, Credibility perceptions and detection accuracy of fake news headlines on social media: Effects of truth-bias and endorsement cues, Communication Research 49 (2022) 171–195.
[23] I. Vamanu, E. Zak, Information source and content: articulating two key concepts for information evaluation, Information and Learning Sciences (2022).
[24] L. Ripoll, J. C. Matos, Information reliability: criteria to identify misinformation in the digital environment, Investigación Bibliotecológica 34 (2020) 79–101.
[25] A. Zrnec, M. Poženel, D. Lavbič, Users' ability to perceive misinformation: An information quality assessment approach, Information Processing & Management 59 (2022) 102739.
[26] B. C. Stahl, On the difference or equality of information, misinformation, and disinformation: A critical research perspective, Informing Science 9 (2006) 83.
[27] A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice, Science 211 (1981) 453–458.
[28] K. Velupillai, Y. Kao, Computable and computational complexity theoretic bases for Herbert Simon's cognitive behavioral economics, Cognitive Systems Research 29-30 (2014) 40–52.
[29] G. Lingua, A. De Cesaris, Immersività distratta. La nuova economia dell'attenzione negli ambienti digitali, MeTis 10 (2020) 63–84.
[30] M. Angelelli, Tropical limit and a micro-macro correspondence in statistical physics, Journal of Physics A: Mathematical and Theoretical 50 (2017) 415202.
[31] D. Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, New York, US, 2017.
[32] M. Grobler, R. Gaire, S. Nepal, User, usage and usability: Redefining human centric cyber security, Frontiers in Big Data 4 (2021) 583723.
[33] F. Tommasi, C. Catalano, I. Taurino, Browser-in-the-Middle (BitM) attack, International Journal of Information Security 21 (2022) 179–189.
[34] M. T. Baldassarre, V. S. Barletta, D. Caivano, A. Piccinno, M. Scalera, Privacy knowledge base for supporting decision-making in software development, in: Sense, Feel, Design: INTERACT 2021 IFIP TC 13 Workshops, Bari, Italy, August 30–September 3, 2021, Revised Selected Papers, Springer, 2022, pp. 147–157.
[35] V. S. Barletta, F. Cassano, A. Marengo, A. Pagano, J. Pange, A. Piccinno, Switching learning methods during the pandemic: A quasi-experimental study on a master course, Applied Sciences 12 (2022) 8438.
[36] A. Dzedzickis, A. Kaklauskas, V. Bucinskas, Human emotion recognition: Review of sensors and methods, Sensors 20 (2020) 592.
[37] P. Toreini, M. Langner, A. Maedche, Using eye-tracking for visual attention feedback, in: Information Systems and Neuroscience: NeuroIS Retreat 2019, Springer, 2020, pp. 261–270.
[38] B. M. Sheliga, L. Riggio, G. Rizzolatti, Orienting of attention and eye movements, Experimental Brain Research 98 (1994) 507–522.
[39] M. Borys, M. Plechawska-Wójcik, Eye-tracking metrics in perception and visual attention research, EJMT 3 (2017) 11–23.
[40] J. Simko, M. Hanakova, P. Racsko, M. Tomlein, R. Moro, M. Bielikova, Fake news reading on social media: An eye-tracking study, in: Proceedings of the 30th ACM Conference on Hypertext and Social Media, 2019, pp. 221–230.
[41] M. Carpita, E. Ciavolino, MEM and SEM in the GME framework: Statistical modelling of perception and satisfaction, Procedia Economics and Finance 17 (2014) 20–29.
[42] A. Corallo, L. Fortunato, A. Massafra, P. Pasca, M. Angelelli, M. Hobbs, A. D. Al-Nasser, A. I. Al-Omari, E. Ciavolino, Sentiment analysis of expectation and perception of Milano Expo2015 in Twitter data: A generalized cross entropy approach, Soft Computing 24 (2020) 13597–13607.
[43] M. Angelelli, B. Konopelchenko, Geometry of basic statistical physics mapping, Journal of Physics A: Mathematical and Theoretical 49 (2016) 385202.
[44] M. Angelelli, B. Konopelchenko, Entropy driven transformations of statistical hypersurfaces, Reviews in Mathematical Physics 33 (2021) 2150001.
[45] M. T. Baldassarre, V. S. Barletta, D. Caivano, A. Piccinno, A visual tool for supporting decision-making in privacy oriented software development, in: AVI '20: Proceedings of the International Conference on Advanced Visual Interfaces, ACM, 2020. doi:10.1145/3399715.3399818.