<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Neuro-Conceptualization: Visual Conceptual Modeling meets Neuroscience</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>John Krogstie</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kshitij Sharma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Norwegian University of Science and Technology, NTNU</institution>
          ,
          <addr-line>Trondheim</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A great deal of research has been done on the comprehension and development of conceptual models. In related areas such as linguistics and software engineering, techniques from neuroscience have been taken into use to study the biological and neurological processes at work when people deal with textual knowledge representations. So far this has only been the case to a limited extent for visual conceptual models. In this paper we present ongoing research on the use of techniques from neuroscience to investigate how we develop and comprehend visual conceptual models. Traditionally, neuroscience techniques have depended on EEG or even large MR machines for techniques such as fMRI; we outline planned work that uses multimodal data analysis to study modeling tasks closer to how they are actually performed.</p>
      </abstract>
      <kwd-group>
        <kwd>Novel directions talk</kwd>
        <kwd>Conceptual process modeling</kwd>
        <kwd>NeuroIS</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>NeuroIS is a research field in which neuroscience theories and tools are used to better understand
information systems phenomena. Existing research areas in NeuroIS are summarized in [14], where it
appears that the main focus is on the use of information systems. Lately, software development
tasks such as programming have also been studied heavily in the literature [12], whereas other tasks often linked
to IS development, such as visual conceptual modeling, have so far been studied only to a very limited degree
using techniques from neuroscience beyond eye-tracking [5].</p>
      <p>Visual conceptual modeling is a central activity in information systems analysis and design. It
involves the construction of abstract models that capture the structure, behavior, and relationships of
real-world entities or concepts, and the two-dimensional layout allows modelers to play with both the primary
and secondary notation [7] to convey meaning. Integrating neuroscience techniques into conceptual
modeling opens up new opportunities for understanding how the human brain processes and represents
complex information, which can, in turn, inform and enhance the development of more effective
modeling approaches when combined with insights from fields such as conceptual modeling,
linguistics, and cognitive psychology.</p>
      <p>The use of neuroscience in conceptual modeling primarily focuses on understanding the neural
mechanisms underlying concept formation, representation, comprehension, and manipulation. By
leveraging advanced neuroimaging techniques such as functional magnetic resonance imaging (fMRI),
functional near-infrared spectroscopy (fNIRS), and electroencephalography (EEG), researchers can
examine brain activation patterns and connectivity while participants engage in tasks that require
conceptual reasoning or problem-solving. A challenge with some of the more advanced neuroscience
techniques, such as fMRI, is that the accuracy of the results comes at a cost, in particular to the ecological
validity of the trial situation and the cost-benefit ratio of the technique used; we therefore aim to use
less intrusive techniques in concert with multimodal data analytics. A comprehensive
description of current neuroscience techniques as applied in informatics can be found in [12].</p>
      <p>Moreover, the study of individual differences in conceptual processing and the neural basis of
expertise in specific domains can provide valuable information on the factors that contribute to the
development of expert-level conceptual reasoning and problem-solving abilities.</p>
      <p>The use of neuroscience techniques in connection to conceptual modeling has the potential to
significantly advance our understanding of the neural basis of complex information comprehension,
processing and representation. By combining insights from both fields, researchers can develop more
effective conceptual modeling approaches that better align with the inherent capabilities and
constraints of the human brain.</p>
      <p>As mentioned, techniques such as eye-tracking are already used quite a bit for model comprehension and
modeling process analysis [1], but papers in the area primarily mention more extensive use of
neuroscience techniques as a next step [15], although other techniques are gradually being
taken into use [13]. In this novel direction talk, we present preliminary plans for experiments on
how to use input from a number of different sensors in multimodal data analysis to provide a better
understanding of how the brain performs different modeling tasks.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Current research plan</title>
      <p>We plan a number of experiments, starting with simple model comprehension tasks and
extending to more complex tasks in settings as close to a normal modeling situation as the sensors allow.
Compared to lying still in an MR machine, a normal modeling situation involves movement that
typically introduces noise which some current sensors find hard to deal with, but sensor quality
has improved over the last years and is expected to improve further. As a start we envisage the
following experiments (partly inspired by recent work in the field of code
comprehension):
1. Investigate the usage of the brain when working with conceptual/visual models.
RQ 1: Which brain regions are activated during model comprehension? (Similar to what is done in
[9], where one looked at the brain regions active when performing computer program comprehension
tasks.) Experimental task: a model (in a modeling language known to the participant) is presented to
the participant, who uses it to answer presented comprehension questions. We will start with a model
in one diagram (i.e., the whole model is visible, with no need for scrolling or navigating in a hierarchy) to
avoid too much bodily movement by the participant. The models used are similar to those used in
standard process model comprehension tasks [6].</p>
      <p>2. Investigate the usage of the brain when working with conceptual/visual models compared to
how it operates when using a text expressing the same information (with a possible parallel to what
is done in [2], where differences between working with visual and textual programming languages were
investigated).</p>
      <p>Main hypothesis: Different parts of the brain are used more intensively when working with visual
knowledge representations than when working with textual knowledge representations. Experimental
task: take two domains, each presented both as a model and as a text, with comprehension questions for both
domains. Use a Latin-square set-up to give participants different settings, e.g., one group first sees a
model of domain A and then a text of domain B, etc. In addition to studying the parts of the brain used,
measure cognitive load and possibly also other characteristics (see below).</p>
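The counterbalancing described above can be sketched as follows. This is a minimal illustration, assuming two domains (A, B) crossed with two representations (model, text); the condition orderings and the `assign` helper are our own hypothetical names, not part of the planned experimental software.

```python
from itertools import cycle

# Hypothetical sketch: the four counterbalanced orderings of the
# 2x2 design (domain A/B x model/text). Each row is one presentation
# order; every condition appears equally often in every position, and
# each participant sees each domain once and each representation once.
CONDITIONS = [
    [("A", "model"), ("B", "text")],
    [("B", "text"), ("A", "model")],
    [("A", "text"), ("B", "model")],
    [("B", "model"), ("A", "text")],
]

def assign(participant_ids):
    """Rotate participants through the rows of the Latin square."""
    rows = cycle(CONDITIONS)
    return {pid: next(rows) for pid in participant_ids}
```

With the planned 60-70 participants, each ordering would be used roughly 15-18 times, keeping presentation order balanced across groups.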
      <p>3. Investigate the usage of the brain when using different modeling languages (e.g., BPMN vs. UML AD
for process modeling). Use a similar Latin-square set-up based on models in both languages
representing two different domains.</p>
      <p>4. How do layout and other aspects of secondary notation influence model comprehension? (Cf. [9],
where they looked at how layout and beacons in source code influence program comprehension.) This needs
to be detailed based on issues found in cognitive psychology, listed as empirical model quality issues in
the SEQUAL framework on model quality [4].</p>
      <p>5. How can detected information on, e.g., cognitive load be used to provide feedback tools that support
the modeler?</p>
      <p>A more detailed set-up, currently being implemented for the first two tasks and also extending into
affective and behavioral aspects, is presented below:</p>
      <p>RQ1: What are the differences in affective, behavioral, and cognitive processes across different
levels of model comprehension?
• Sub-RQ: What are the major brain regions responsible for visual model comprehension?
• Sub-RQ: How does the cognitive load evolve during the comprehension process?
• Sub-RQ: What are the roles of stress and physiological arousal in reaching a certain comprehension
performance level?</p>
      <p>Prediction question: How accurately can we predict the comprehension level of the modelers from
the affective, behavioral, and cognitive measurements using deep learning networks? What are the
various dimensions of explainability in such a predictive model?</p>
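For such prediction to generalize, evaluation should hold out whole participants rather than random time windows of the same person. The following sketch shows one common way to structure this, leave-one-participant-out cross-validation; the record format and function name are our own illustrative assumptions, not the study's actual pipeline.

```python
def leave_one_participant_out(records):
    """Yield (held_out, train, test) splits where each participant's
    data is held out in turn, so the predictor is always evaluated on
    an unseen person rather than unseen windows of a seen person."""
    participants = sorted({r["participant"] for r in records})
    for held_out in participants:
        train = [r for r in records if r["participant"] != held_out]
        test = [r for r in records if r["participant"] == held_out]
        yield held_out, train, test
```

The deep learning model itself (and its explainability analysis) would be fitted on `train` and scored on `test` within each split.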
      <p>The experiment follows a time-series repeated-measures design.</p>
      <p>The models are represented both as text and as visual business process models, and the
participants are divided in a Latin-square fashion. NASA TLX is a self-assessment of task load [11].
We aim to have around 60-70 participants in total. Participants will be recruited from the NTNU student
and employee population; NTNU has more than 40,000 students across all academic fields. The
study's procedures for protecting the participants' privacy have been reported to and accepted by the
national authorities in this matter (NSD approval). In the eye-tracking measures, AOIs are areas of
interest defined by the researcher; focal attention corresponds to short saccades and long fixations,
and ambient attention to long saccades and short fixations.</p>
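The focal/ambient distinction from the eye-tracking measures can be sketched as a simple classifier over fixation duration and saccade amplitude. The thresholds below are placeholders for illustration only, not values from the study.

```python
# Assumed (hypothetical) boundaries; real values would be calibrated
# per eye-tracker and stimulus geometry.
FIXATION_MS = 250     # boundary between short and long fixations
SACCADE_DEG = 5.0     # boundary between short and long saccades

def classify(fixation_ms, saccade_deg):
    """Label a fixation-saccade pair: focal attention = long fixations
    with short saccades; ambient = short fixations with long saccades."""
    if fixation_ms >= FIXATION_MS and saccade_deg < SACCADE_DEG:
        return "focal"
    if fixation_ms < FIXATION_MS and saccade_deg >= SACCADE_DEG:
        return "ambient"
    return "mixed"
```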
      <p>Wristband measures are mostly used for prediction, since we lack measures that can be interpreted
directly: HR = heart rate, EDA = electrodermal activity (skin conductance), BVP = blood volume
pulse, TEMP = skin temperature.</p>
      <p>EDA peak height / EDA peak rate / EDA slope: cognitive load.
BVP power spectrum low/high ratio / BVP amplitude / number of EDA responses detected / EDA
mean / EDA rising time / TEMP slope: stress.
TEMP (mean, sd, kurtosis, skewness) / EDA peaks / HR variability (mean, sd, kurtosis,
skewness): emotional stress.
EDA change detection measures: acute stress cycle (normal, aroused, stressed, relaxed).
HR recovery rate changes (duration and counts): chronic stress.</p>
      <p>We can additionally compute the action units (AUs) from the faces of the participants, captured using
cameras. Once we have these AUs, we can compute various emotions as combinations of
AUs, such as happiness, sadness, surprise, fear, anger, disgust, and contempt. Second, we can
compute the emotional profile (entropy of AUs, stability of emotions, emotional similarity between
peers) of the participants, similar to [12]. We do not have space here to go into detail on the machine learning
interpretation of the data, but this will be presented at the conference. We also note that using a large
number of inputs in parallel brings additional challenges in synchronizing the output of the different
sensors.</p>
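The synchronization challenge mentioned above is typically handled by resampling every sensor stream onto one shared clock before the streams are combined. A minimal sketch, assuming each stream is a sorted list of (timestamp, value) pairs; the function name is illustrative:

```python
def resample(stream, common_times):
    """Linearly interpolate a (t, value) stream onto common_times;
    timestamps outside the stream's range are clamped to the edges."""
    out = []
    for t in common_times:
        if t <= stream[0][0]:
            out.append(stream[0][1])
        elif t >= stream[-1][0]:
            out.append(stream[-1][1])
        else:
            for (t0, v0), (t1, v1) in zip(stream, stream[1:]):
                if t0 <= t <= t1:
                    out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
                    break
    return out
```

After this step, EEG, eye-tracking, wristband, and camera features share one timeline and can be fed jointly to the prediction models.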
    </sec>
    <sec id="sec-3">
      <title>3. Concluding remarks</title>
      <p>Whereas neuroscience techniques are being used for a number of tasks connected to IS usage and
programming, their application in connection with the use of visual conceptual models has so far been
limited. In this novel direction talk we have given an overview of a multimodal approach for capturing
neuroscientific data in connection with conceptual modeling, which we hope will spark ideas at the
conference on how to bring this area of research forward.</p>
    </sec>
    <sec id="sec-4">
      <title>4. References</title>
      <p>[1] R. Batista Duarte, D. Silva da Silveira, V. de Albuquerque Brito, C. S. Lopes, A systematic literature review on the usage of eye-tracking in understanding process models, Business Process Management Journal, Vol. 27, No. 1 (2021) 346–367.
[2] S. Doukakis, Exploring brain activity and transforming knowledge in visual and textual programming using neuroeducation approaches, AIMS Neurosci., 6 (2019).
[3] M. Giannakos, D. Spikol, D. Di Mitri, K. Sharma, X. Ochoa, R. Hammad (Eds.), The Multimodal Learning Analytics Handbook, Springer, 2022.
[4] J. Krogstie, Quality in Business Process Modeling, Springer, 2016.
[5] J. Pinggera, M. Neurauter, S. Zugal, M. Martini, M. Furtner, P. Sachse, D. Schnitzer, Fixation patterns during process model creation: initial steps toward neuro-adaptive process modeling environments. In: Proceedings of the 49th Hawaii International Conference on System Sciences (HICSS), 2016, pp. 600–609.
[6] H. Ritchi, M. Jans, J. Mendling, H. A. Reijers, The Influence of Business Process Representation on Performance of Different Task Types, Journal of Information Systems, 34 (1) (2020) 167–194.
[7] M. Schrepfer, J. Wolf, J. Mendling, H. A. Reijers, The Impact of Secondary Notation on Process Model Understanding. In: Persson, A., Stirna, J. (eds) The Practice of Enterprise Modeling. PoEM 2009. Lecture Notes in Business Information Processing, vol 39. Springer, Berlin, Heidelberg, 2009.
[8] K. Sharma, S. Papavlasopoulou, M. Giannakos, Joint Emotional State of Children and Perceived Collaborative Experience in Coding Activities. In: Proceedings of the 18th ACM International Conference on Interaction Design and Children. ACM, 2019.
[9] J. Siegmund, C. Kästner, S. Apel, C. Parnin, A. Bethmann, T. Leich, G. Saake, A. Brechmann, Understanding understanding source code with functional magnetic resonance imaging. In: Proceedings of the 36th International Conference on Software Engineering - ICSE 2014, pp. 378–389.
[10] J. Siegmund, N. Peitek, C. Parnin, S. Apel, J. Hofmeister, C. Kästner, A. Begel, A. Bethmann, A. Brechmann, Measuring neural efficiency of program comprehension. In: Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering - ESEC/FSE 2017, 2017, pp. 140–150.
[11] NASA Task Load Index, https://humansystems.arc.nasa.gov/groups/TLX/ Last accessed 16/3-2023.
[12] B. Weber, T. Fischer, R. Riedl, Brain and autonomic nervous system activity measurement in software engineering: A systematic literature review, Journal of Systems and Software, Volume 178 (2021).
[13] M. Winter, H. Neumann, R. Pryss, T. Probst, M. Reichert, Defining gaze patterns for process model literacy – Exploring visual routines in process models with diverse mappings, Expert Systems with Applications, Volume 213 (2023).
[14] J. Xiong, M. Zuo, What does existing NeuroIS research focus on? Information Systems, Volume 89 (2020).
[15] M. Zimoch, T. Mohring, R. Pryss, T. Probst, W. Schlee, M. Reichert, Using Insights from Cognitive Neuroscience to Investigate the Effects of Event-Driven Process Chains on Process Model Comprehension. In: Teniente, E., Weidlich, M. (eds) Business Process Management Workshops. BPM 2017. Lecture Notes in Business Information Processing, vol 308. Springer, 2017.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>