=Paper= {{Paper |id=Vol-2524/paper29 |storemode=property |title=A comparison between digital and traditional tools to assess autism: effects on engagement and performance |pdfUrl=https://ceur-ws.org/Vol-2524/paper29.pdf |volume=Vol-2524 |authors=Roberta Simeoli,Miriana Arnucci,Angelo Rega,Davide Marocco |dblpUrl=https://dblp.org/rec/conf/psychobit/SimeoliARM19 }} ==A comparison between digital and traditional tools to assess autism: effects on engagement and performance== https://ceur-ws.org/Vol-2524/paper29.pdf
A comparison between digital and traditional tools to assess autism: effects on engagement and performance


        Roberta Simeoli¹, Miriana Arnucci¹, Angelo Rega², Davide Marocco¹
                        ¹ University of Naples Federico II (ITALY)
                   ² Neapolisanit srl-Rehabilitation Center (ITALY)


     Abstract. Most autism assessment tools include behavioral observation ses-
     sions in a natural environment and are mainly based on subjective measures
     of behavior. It is often difficult to clearly observe the purely cognitive as-
     pects of the disorder. Therefore, in recent years there has been a growing
     need to make cognitive assessment processes more motivating and capable
     of providing objective measures of the disorder. This study compares the
     performance and engagement of a group of autistic subjects during the exe-
     cution of a classic cognitive test in its traditional and digital versions, high-
     lighting a preference for the digital version, especially in subjects with
     mental retardation.


     Keywords: autism; digital assessment tool; engagement.



1   Introduction

Autism is a neurodevelopmental disorder, characterized by organizational difficulties
of thought and of the main functions that regulate human adaptation. It is considered a
functional disorder which involves a permanent general disability within three main
areas: qualitative impairment of social interaction, qualitative impairment of commu-
nication, and the presence of restricted behaviors, interests, and activities, which are
repetitive and stereotyped. One of the main problems encountered during the treatment
and assessment of autism is motivation: it is often difficult to involve these subjects in
shared activities, and the moment of cognitive evaluation can become very tiring.
Moreover, by studying behavior from a merely subjective perspective, referring to
purely psychological theories and hypotheses, many fundamental elements that are
intrinsically present in natural behaviors are lost. A "macro-behavior", as the final
result of a cognitive task, can be readily identified through conscious observation, but
such behaviors are made of elements that can go undetected, such as the time actually
spent concentrating on the task. Elements like these can be useful to evaluate
engagement levels [1], but they occur too quickly and elude conscious observation.
However, contemporary technology enables us to capture these elements with extreme
accuracy. Technology allows us to obtain some transparent and objective measures that
can bring studies on autism to a higher and more rigorous standard. The use of digital
assessment tools can have a great impact on clinical evaluation processes. By proposing
simplified and more easily controlled models of reality [2], they can yield results that
overcome the limits of purely observational evaluation, focusing more on the construct
of interest and being less dependent on environmental variability. Research
incorporating technology has consistently demonstrated good effects for the use of
computers [3, 4], video [5, 6], mechanical prompting devices [7, 8], and numerous
other technologies with children with autism. The next pressing questions about
technology-based interventions focus primarily on whether these interventions
are more efficacious or enjoyable than more traditional, low-tech interventions. The
digitization of a neuropsychological test, besides providing new, more objective and
accurate measurements, has the further advantage of increasing the subject's motivation
to complete the task. Many tests take a long time to administer and tend to bore the
people being evaluated, characteristics that inevitably affect subjects' engagement and
motivation to complete the task. Cleary and Zimmerman (2001) [9] defined motivation
as an intention and engagement as an action: motivation is the "will" to learn, while
engagement is the "skill" of actually performing the tasks involved in learning. The
definition of engagement as the quantity and quality of mental resources directed to an
object of thought strongly implies that the time spent interacting with a specific object
can give us a measure of the motivation and commitment dedicated to it.
To investigate which of the two instruments was more "motivating" for the subjects,
we chose to measure the "latency" times between one response and the next, and the
"interaction" times with the test materials. The present study aims to test the possible
advantages of a digital version of the Leiter-3 [10], a cognitive test widely used in the
field of autism for the non-verbal evaluation of intellectual and cognitive abilities.


2    Methods

Participants were 30 autistic children aged 5 to 9, with a mean age of 7 years (SD =
1.02). The group was composed of 5 females and 25 males. All subjects had been
diagnosed with autism spectrum disorder by qualified doctors and professionals with
no affiliation to our laboratory or our research. The subjects were following
psychomotor and speech therapy at the Neapolisanit S.R.L. center. The subjects
underwent two experimental phases, executed two weeks apart: they carried out the
tasks of the Leiter-3 test in its traditional version and, two weeks later, in a digital
version. To avoid a possible learning effect, the order of administration was
counterbalanced: half the sample performed the digital version first, and the other half
performed the traditional version first. During the administration of the digital version
the participants had
to carry out the cognitive tasks of the Leiter-3 presented on a tablet screen. A Huawei
MediaPad T3 10 tablet was used for the task. The software was developed in Unity and
consists of the presentation of a sequence of scenes taken from the Leiter-3. The user
had to select the images at the bottom of the screen and drag them onto the
corresponding image at the top, depending on the task instructions, as shown in Figure
1. Each scene is composed of a maximum of 6 images at the bottom of the screen,
which can be dragged from one point of the screen to another, and a maximum of 7
fixed images placed at the top of the screen. We call the images at the top
"placeholders"; these are programmed to capture the images moved by the subjects
when they are dropped onto them. The placeholders for each task range from a
minimum of 1 to a maximum of 7 and include distractor images. All images reproduce
geometric figures of different shapes and colors. The task is characterized by growing
difficulty, given by the progressive increase in the number of distracting stimuli and in
the distinctive details of the images. During the performance, the software recorded the
presence of the stimuli and, simultaneously, the movement coordinates resulting from
the dragging of the images from one point of the screen to another. All data were
recorded in real time, and each data point was associated with its execution time in
milliseconds. During
the administration of the test in its traditional version, the subjects were required to
perform the test according to the standardized Leiter-3 procedure. To measure
interaction and latency times, a second experimenter was required to time each user
action, using software that reproduced the test scenarios: the tablet screen depicted all
the images with which the subject was interacting in reality, and a timer was activated
by clicking on the selected images. For each item, the timer started when all test
materials had been correctly placed by the first experimenter and the child was actually
ready to begin. The "interaction" time was defined as the time from the beginning of
the manipulation of a test card until the moment the user left the card in the specific
box. Latency times, on the other hand, were defined from the end of each manipulation
to the beginning of the next one.
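The two timing measures defined above can be sketched in code; this is a minimal illustration under the assumption that each logged manipulation is a (start_ms, end_ms) pair ordered in time, not a reproduction of the authors' actual software:

```python
def timing_measures(drags):
    """Compute interaction and latency times (in seconds).

    `drags` is a time-ordered list of (start_ms, end_ms) pairs,
    one per card manipulation, as logged in milliseconds.
    """
    # Interaction time: from the start of a manipulation until the
    # card is released in its box.
    interactions = [(end - start) / 1000.0 for start, end in drags]
    # Latency time: from the end of one manipulation to the start
    # of the next one.
    latencies = [(drags[i + 1][0] - drags[i][1]) / 1000.0
                 for i in range(len(drags) - 1)]
    return interactions, latencies

# Hypothetical log of three manipulations, in milliseconds.
events = [(0, 2900), (5550, 8450), (11100, 14000)]
inter, lat = timing_measures(events)
# inter -> [2.9, 2.9, 2.9]; lat -> [2.65, 2.65]
```

Per-item means of these two lists then give the per-subject interaction and latency scores compared across the two test versions.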




Figure 1. An example of a scene reproducing a Leiter-3 classification task.




3    Results

A repeated-measures ANOVA was used to compare performance scores, latency times,
and interaction times. The performance scores were slightly higher for the digital test,
with an average score of 86.23 against 82.03 for the traditional test (p = 0.64). The
within-subjects non-centrality parameter was 3.710. Latency times were significantly
higher during the traditional test (p ≤ 0.001), with an average of 2.65 s (SD 0.56) for
the digital test against 3.63 s (SD 1.25) for the traditional test. Interaction times were
instead slightly, but not significantly (p = 0.57), higher during the digital test, with an
average of 2.9 s (SD 1.37) against 2.6 s (SD 2.15). A correlation analysis was also
conducted between the IQ scores and the difference in scores obtained on the two tests
for each subject. The score obtained on the digital test minus the score obtained on the
traditional test gave us a "best performance" index: positive values indicated better
performance on the digital test, negative values on the traditional one. This analysis
showed a significant negative correlation between the IQ values and the "best
performance" index. A second correlation analysis was carried out between the IQ
scores and the intra-subject differences between the latency times recorded during the
two versions of the test. Positive values indicated greater latency during the digital test,
negative values during the traditional one. The results indicated a negative but not
significant correlation (p = 0.58) between IQ and the latency-time differences.
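The analysis pipeline above can be sketched as follows. This is an illustration on made-up numbers, not the study data; the array contents and the use of SciPy are assumptions, and with only two conditions a paired t-test is statistically equivalent to the repeated-measures ANOVA (F = t²):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores for 30 subjects (NOT the study's real data).
digital = rng.normal(86, 10, 30)      # digital-test scores
traditional = rng.normal(82, 10, 30)  # traditional-test scores
iq = rng.normal(70, 15, 30)           # IQ scores

# Paired comparison of the two conditions (equivalent to a
# two-level repeated-measures ANOVA).
t_stat, p_scores = stats.ttest_rel(digital, traditional)

# "Best performance" index: digital minus traditional score;
# positive values favor the digital test. Correlate it with IQ.
best_perf = digital - traditional
r, p_corr = stats.pearsonr(iq, best_perf)

print(f"paired t = {t_stat:.2f} (p = {p_scores:.3f}), "
      f"r(IQ, index) = {r:.2f} (p = {p_corr:.3f})")
```

The second correlation reported above follows the same pattern, with the intra-subject latency-time difference in place of the score difference.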



4 Discussion

The results highlighted some interesting aspects. Although there were no major
differences in terms of performance, the negative correlation between IQ and the "best
performance" index suggests a tendency to prefer interaction with digital tools over
traditional ones, supported by a greater level of engagement, especially in subjects with
more severe mental retardation. Defining engagement as the quantity and quality of
time dedicated to the interaction with an object, we measured the "latency" and
"interaction" times for the two tests: the results showed longer latency times during the
traditional test and a tendency to spend more time interacting with the object during
the digital test. Indeed, during the traditional administration children lost concentration
more easily between one answer and the next, while during the digital administration
they preferred to start interacting with the tablet even before having identified the
correct answer. In any case, this behavior did not significantly affect performance, but
it did affect the total time taken by the subjects to carry out the tasks. The total
administration time, instead, was greatly reduced in the digital version, owing to the
time required to prepare all the materials in the traditional version. Furthermore, if
latency is really an index of engagement, and if the performance of subjects with lower
IQs is really explained by higher levels of engagement, then a correlation between these
two factors should also exist; our results showed a correlation in the direction we
proposed.



5 Conclusion

The present study aimed to verify the advantages of using a digital version of a tool for
assessing cognitive abilities in a sample of subjects with autism. The results confirmed
the hypothesis that the use of digital tools for the evaluation of autism can have a
positive impact on the speed of administration as well as on motivation and
engagement, especially in subjects with more severe mental retardation, who are also
the most difficult to evaluate in everyday clinical practice. An interesting future
direction could be to identify new objective measures that can serve as indices of
engagement, for instance by adding an eye-tracking device to capture eye movements
and attentional focus, which can all be considered constituent elements of the
engagement construct [1].




References


    1    Miller, B. W. (2015). Using reading times and eye-movements to measure
         cognitive engagement. Educational Psychologist, 50(1), 31-42. DOI:
         10.1080/00461520.2015.1004068
    2    Ponticorvo, M., Di Fuccio, R., Ferrara, F., Rega, A., & Miglino, O. (2019).
         Multisensory educational materials: Five senses to learn. Advances in
         Intelligent Systems and Computing, 804, 45-52.
    3    Bernard-Opitz, V., Sriram, N., & Nakhoda-Sapuan, S. (2001). Enhancing
         social problem solving in children with autism and normal children through
         computer-assisted instruction. Journal of Autism and Developmental
         Disorders, 31, 377-384.
    4    Silver, M., & Oakes, P. (2001). Evaluation of a new computer intervention
         to teach people with autism or Asperger syndrome to recognize and predict
         emotions in others. Autism, 5, 229-316.
    5    Charlop, M. H., & Milstein, J. P. (1989). Teaching autistic children
         conversational speech using video modeling. Journal of Applied Behavior
         Analysis, 22, 275-285.
    6    Shipley-Benamou, R., Lutzker, J. R., & Taubman, M. (2002). Teaching daily
         living skills to children with autism through instructional video modeling.
         Journal of Positive Behavior Interventions, 4, 165-175.
    7    Taylor, B. A., & Levin, L. (1998). Teaching a student with autism to make
         verbal initiations: Effects of a tactile prompt. Journal of Applied Behavior
         Analysis, 31, 651-654.
    8    Shabani, D. B., Katz, R. C., Wilder, D. A., Beauchamp, K., Taylor, C. R., &
         Fischer, K. J. (2002). Increasing social initiations in children with autism:
         Effects of a tactile prompt. Journal of Applied Behavior Analysis, 35, 79-83.
    9    Cleary, T. J., & Zimmerman, B. J. (2001). Self-regulation differences during
         athletic practice by experts, non-experts, and novices. Journal of Applied
         Sport Psychology, 13, 185-206.
    10   Roid, G. H., Miller, L. I., & Pomplun, M. (2013). Leiter International
         Performance Scale, Third Edition (Leiter-3). Wood Dale, IL: Stoelting Co.