<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Envisioning a Computational Thinking Assessment Tool</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ilenia Fronza</string-name>
          <email>ilenia.fronza@unibz.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claus Pahl</string-name>
          <email>claus.pahl@unibz.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Free University of Bolzano</institution>
          ,
          <addr-line>Piazza Domenicani 3, 39100 Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>26</volume>
      <issue>1</issue>
      <fpage>03</fpage>
      <lpage>09</lpage>
      <abstract>
        <p>Recent work on Computational Thinking (CT) has focused on proposing new curricula, but in many cases the assessment phase has been overlooked. The issue is critical, because appropriate assessment is needed to facilitate the incorporation of CT in the curriculum. What is now clear from the existing literature is that there is a need to build on top of the existing multiple forms of assessment, in order to integrate multiple approaches and reach a comprehensive assessment of CT learning. In this paper, we envision a system that integrates different types of assessments while providing an intuitive interface that allows teachers to see and supervise an overview of the learning process, with the possibility to assess each student's learning individually. To assess the suitability of our idea, we describe the Proof of Concept of a mobile application to assist CT assessment, and we discuss the challenges that need to be solved to create such an application.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Computational Thinking (CT) is considered a key set of skills that must be learned by today's generation of learners, both in the context of STEM and of other subjects [Gro17]. Therefore, CT has caught the attention of a broad academic community. Many studies have first tried to capture the essence of CT and to create an agreed definition, as CT was a rather broad term [BR12, Win14]. Then, researchers and educators have focused on designing new activities to foster CT across the curriculum. Unfortunately, in many cases more weight has been given to the development of methods to teach CT than to proposing how learning will be assessed and evaluated. This is a relevant issue, because appropriate assessment instruments are needed to incorporate CT in the curriculum [HL15].</p>
      <p>Most of the existing assessment procedures focus on code analysis. This approach might be convenient in a professional environment [CF15], but it neglects the broader aims of CT [Cro14, Net13]. Other procedures, while valuable for research and for providing a view of students' learning, are subjective, time-consuming, and not easily usable regularly in classrooms [Gro17].</p>
      <p>Nowadays, many researchers agree that using only one type of assessment can lead to misunderstanding the development of CT skills [RGMLR17]. Therefore, in order to reach a comprehensive assessment of CT learning, a "system of assessments" needs to be adopted, i.e., a combination of complementary assessment tools. To this end, S. Grover [Gro15] suggests considering multiple measures that are complementary, encourage and reflect deeper learning, and contribute to a comprehensive picture of students' learning.</p>
      <p>At this point, it should be considered that the use of a system of assessments could make the teacher's work even more complicated, which could considerably limit the adoption of this type of system in the classroom. In fact, the teacher would need to regularly apply different types of assessment (e.g., code analysis, interviews) and then integrate the results to obtain an overall assessment.</p>
      <p>To address this issue, we envision a system that integrates different types of assessments while providing an intuitive interface that allows teachers to see and supervise an overview of the students' learning process, with the possibility to assess individual learning. To assess the suitability of our idea, we describe the Proof of Concept (PoC) of a mobile application, and then we discuss possible challenges that need to be solved in order to create such an application.</p>
      <p>Section 2 reports the state of the art of existing CT systems of assessments. Section 3 describes the proposed Proof of Concept of a mobile application to assist CT assessment, and Section 4 discusses the challenges that need to be faced in order to create such an application. Section 5 draws conclusions from this work, also proposing possible directions for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>State of the Art</title>
      <p>Recent work on Computational Thinking (CT) has focused on environments and tools that foster CT [GP13, RWI10] and on new curricula for CT (e.g., [SFH+12, FEIJ+14]), also using CT as a medium for teaching other subjects (e.g., [FEIC15, FZ15, Edw11]). In this scenario, a major gap has emerged in research on CT assessment. Indeed, in many cases more weight is given to the development of methods to teach and foster CT without thinking about how the resulting learning will be assessed and evaluated. This issue is critical, because assessment not only determines whether or not educational goals are being met, it also drives the design of the curriculum [HL15]. Grover and Pea make the gravity of the lack of CT assessment clear: "Without attention to assessment, CT can have little hope for making its way successfully into any K-12 curriculum" [GP13].</p>
      <p>Given the absolute need for an assessment methodology, many efforts in recent years have aimed specifically at tackling the issue of CT assessment. An overview of the proposed approaches, divided according to their perspective (e.g., summative assessment, perceptions-attitudes scales, etc.), can be found in the recent work of Roman-Gonzalez et al. [RGMLR17].</p>
      <p>What is clear from the existing literature is the need to build on top of the multiple forms of assessment that have been proposed so far, in order to reach a comprehensive assessment of CT learning. Research is now moving in this direction, and some examples of "systems of assessments" have been proposed [Gro15].</p>
      <p>For example, Brennan and Resnick [BR12] described three approaches for assessing the development of CT concepts, practices, and perspectives. Fronza et al. [FIC17] developed a framework to assess the development of computational concepts and practices. Roman-Gonzalez et al. [RGMLR17] aimed at studying the convergent validity of their CT summative-aptitudinal assessment test with respect to other assessment tools. S. Grover [Gro17] described the multiple forms of assessments designed and empirically studied in a middle school introductory computing curriculum.</p>
      <p>If we want these systems of assessments to make it to the classroom, then, in addition to building these systems, another goal must be to reflect on how to facilitate their adoption in the classroom, without leaving on the teacher's shoulders the task of manually integrating different types of assessments to achieve an overall assessment.</p>
      <p>Despite this need, few if any tools exist that enable real-time, overall, formative assessment of CT. As we describe in the next section, we aim to fill this research gap by envisioning an assessment tool that assists CT assessment, integrating different types of assessments and providing an intuitive interface.
</p>
    </sec>
    <sec id="sec-poc">
      <title>CT Assessment Tool: A Proof of Concept</title>
      <p>We envision a mobile application that supports CT assessment as follows. It integrates different types of assessments and provides an intuitive interface that allows teachers to see and supervise an overview of the students' learning process, with the possibility to assess each student's learning individually. In this section, we describe a Proof of Concept of this application. First, we describe a possible approach to define the assessment framework that the application should implement. Then, we detail possible design choices that would ease the assessment process. Finally, we report on architectural considerations.</p>
      <sec id="sec-2-1">
        <title>Assessment Framework</title>
        <p>Following the guidelines of the existing literature, our assessment framework, being a system of assessments, should assess different skills, such as CT concepts, practices, and perspectives [BR12], cognitive skills, and also social and relational skills [Wor94]. The "Goal-Question-Metric" (GQM) measurement model [BCR94] could be adopted in order to provide an effective view of students' learning. In fact, the GQM approach helps to clearly specify the object of study, the aspect of study, the purpose of the assessment, and the environment in which the data is collected.</p>
        <p>The GQM approach foresees the definition of measurement goals, questions, and metrics. In our case, goals specify the assessment needs (i.e., CT skills) in a formal way; questions define information gaps that need to be filled to understand whether a measurement goal can be achieved or not; and measurements help to answer the measurement questions.</p>
        <p>An example of such a model is depicted in Figure 1: the problem of assessing the learning of computational concepts is modeled as a GQM goal. The goal is then assessed using measurement questions: in the example of Figure 1, these questions are how well the student understands conditionals, sequences, and loops. Each question is then answered using one or more measurements; for example, one measurement to understand whether the student understands the correct usage of a loop is the presence of a "for" construct in the source code provided by the student, when this is required by the given problem.</p>
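        <p>As an illustration, the GQM structure just described could be represented directly in code. The sketch below is a hypothetical Python rendering of the Figure 1 example; the Goal, Question, and Metric classes and the regex-based measurement are our assumptions, not part of the paper.</p>
        <preformat>
```python
# Hypothetical sketch of the GQM model of Figure 1.
import re
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    measure: callable  # maps a student's source code to a number

@dataclass
class Question:
    text: str
    metrics: list

@dataclass
class Goal:
    purpose: str
    questions: list

def count_for_loops(source: str) -> int:
    """One possible measurement: occurrences of a 'for' construct."""
    return len(re.findall(r"\bfor\b", source))

goal = Goal(
    purpose="Assess the learning of computational concepts",
    questions=[
        Question("Does the student understand loops?",
                 [Metric("for-constructs", count_for_loops)]),
    ],
)

student_code = "for i in range(3):\n    print(i)"
for q in goal.questions:
    for m in q.metrics:
        print(m.name, m.measure(student_code))
```
        </preformat>
        <p>In a real tool, the measurement functions would be replaced by the actual analyzers (code analysis, interviews, questionnaires) that feed the assessment.</p>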
      </sec>
      <sec id="sec-2-2">
        <title>Design of the Visual Support</title>
        <p>A dashboard would be an effective means to visually display the outcome of the assessment, which is modeled in the form of GQM models. In dashboards, in fact, pre-attentive properties (e.g., color, shape, location) are used to maximize the understanding of the displayed information and to guide attention [Few13].</p>
        <p>Each tile in the dashboard shall represent a specific measurement. Suppose that, for example, the tile in Figure 2 shows the number of "for" statements in the code. In this case, there are 12 loops in the code, and this number is higher than the previous measurement (as shown by the upward arrow). The tile is colored green to show that a sufficient level has been reached; therefore, the teacher does not have to focus on this tile anymore. If, instead, no loops are found in the code, then the tile shall be red. The teacher, in this case, needs to find out whether the project required a loop and the student did not use the appropriate block but, for example, just repeated the same command many times. In this case, the computational concept "loops" would not be assessed positively [FIC17].</p>
        <p>Figure 3 shows an example in which three skills (i.e., goals) are depicted with their associated questions and metrics. For example, metric B is colored red and therefore requires the attention of the teacher. Metric E is colored yellow and therefore depicts a warning level that should be kept under observation. Tiles that are colored green depict a satisfactory fulfillment of the metric.</p>
        <p>The same rule applies to each skill tile. When all the corresponding metrics are colored green, the skill tile is colored green as well, in order to show immediately to the teacher that there is no need for attention to that skill (for example, "CT skill 2" in Figure 3). The skill tile is red, instead, when immediate intervention is needed because all the metrics are colored red (for example, "CT skill 3"). A yellow skill tile (for example, "CT skill 1") indicates that the corresponding skill should be kept under observation.</p>
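        <p>Our reading of this aggregation rule can be sketched as follows; this is a minimal, assumption-laden sketch, not the paper's implementation.</p>
        <preformat>
```python
# Skill-tile color derived from its metric tiles, per the rule of Figure 3:
# all green -> green, all red -> red, anything mixed -> yellow.
def skill_color(metric_colors):
    if all(c == "green" for c in metric_colors):
        return "green"
    if all(c == "red" for c in metric_colors):
        return "red"
    return "yellow"

print(skill_color(["green", "green"]))          # "CT skill 2": achieved
print(skill_color(["red", "red"]))              # "CT skill 3": intervene
print(skill_color(["green", "yellow", "red"]))  # "CT skill 1": observe
```
        </preformat>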
        <p>The existing literature recommends using an assessment framework at several points of the learning process [BR12]. The proposed dashboard would help the teacher to focus on critical aspects: she/he will ignore the green tiles and focus on the yellow and red ones, which means she/he would assess and provide feedback on those aspects that have not been learned yet or require a more detailed explanation. In the example shown in Figure 3, it would be immediately clear to the teacher that "CT skill 2" has been achieved.</p>
        <p>The rules defining whether a tile becomes green, yellow, or red embed the knowledge extracted during the GQM definition, i.e., the conditions under which teachers can assess whether a skill has been acquired or not.
</p>
      </sec>
      <sec id="sec-2-3">
        <title>Architectural Considerations for the Assessment Tool</title>
        <p>The architectural considerations we identify for the CT assessment tool are the following:</p>
        <p>Functionality. The assessment tool shall have the necessary features to receive the input of key concepts or areas from each student, as well as the comments and preliminary assessment of the teacher. It shall allow the tracking, follow-up, comparison, evaluation, and projection of each considered skill. Due to the scope of the tool, the modules and features pertaining to data reports will be of particular importance throughout the development and acceptance of this tool.</p>
        <p>Maintainability. The CT model, formulated as a GQM measurement model, is of crucial importance for the interpretation of the data. Therefore, it has to be implemented in a modular, configurable way for two reasons: it has to be possible to configure an existing model to accomplish small to medium changes, but it must also be possible to replace the entire model with a new one, if new considerations arise. Moreover, changes could be necessary to allow using the assessment tool at different education levels (such as K-12 or university). Consequently, the assessment framework has to be implemented as a pluggable, configurable component of the final tool, which interacts with the designed dashboard through clear interfaces.</p>
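        <p>One way to realize this pluggable design is sketched below under our own naming assumptions; the AssessmentModel protocol and Dashboard class are illustrative, not part of the paper.</p>
        <preformat>
```python
# Sketch: the GQM model as a swappable component behind a clear interface.
from typing import Protocol

class AssessmentModel(Protocol):
    def evaluate(self, artifacts: dict) -> dict:
        """Map collected student artifacts to metric values."""
        ...

class Dashboard:
    def __init__(self, model: AssessmentModel):
        self.model = model  # replaceable without touching the visual layer

    def refresh(self, artifacts: dict) -> dict:
        return self.model.evaluate(artifacts)

class K12Model:
    """Toy model for one education level; a university-level model
    could be plugged in instead without changing the dashboard."""
    def evaluate(self, artifacts: dict) -> dict:
        source = artifacts.get("source_code", "")
        return {"for-constructs": source.count("for")}

dash = Dashboard(K12Model())
print(dash.refresh({"source_code": "for x in items: pass"}))
```
        </preformat>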
        <p>Usability. A desirable characteristic for the assessment tool is that it be designed to be used on a tablet, because this would allow teachers to move easily around the room (from one student's workstation to another).</p>
        <p>The next section discusses possible challenges that
need to be faced in order to create our envisioned CT
assessment tool.
</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Discussion</title>
      <p>Before starting the development of a real application from the PoC that we have envisioned in Section 3, a number of challenges need to be solved. In the following, we discuss the main challenges that we have already identified.</p>
      <p>Definition of measurements and thresholds. The definition of the CT assessment model (Section 3.1) is a crucial step towards the implementation of our PoC, and it requires the definition of measurements and thresholds. This requires extensive effort and iterative empirical research, also involving teachers to collect their feedback.</p>
      <p>Customization. To facilitate the adoption of this tool in the classroom, care needs to be taken to make it customizable. For example, a teacher may decide to carry out activities to improve only a specific subset of skills; in this case, he/she should be able to select (and integrate) only a part of the measurements available in the tool.</p>
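      <p>For instance, such a selection step could be as simple as filtering the measurement set; the measurement names below are invented for illustration.</p>
      <preformat>
```python
# Sketch: a teacher enables only a subset of the available measurements.
AVAILABLE = {"for-constructs", "conditionals", "sequences", "interview-score"}
selected = {"for-constructs", "conditionals"}  # the teacher's choice
active = AVAILABLE.intersection(selected)      # measurements the tool integrates
print(sorted(active))
```
      </preformat>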
      <p>Code analysis. The tool shall be adaptable to each level of education. From the code analysis perspective, this means, for example, foreseeing the possibility of analyzing code written in different programming languages (e.g., block- or text-based).</p>
      <p>Social skills. One of the most critical aspects will probably be assessing social and relational skills. Indeed, a comprehensive method has not been proposed so far, and the current approach is to evaluate life-skills trainings through observations, questionnaires [AVM15], or self-reports [AM15]. The tool shall provide an opportunity to collect and analyse data from these sources at different points in the learning process.</p>
      <p>Iterative development and empirical research. The tool shall be developed in an iterative fashion, performing empirical research with students and teachers, and adapting it to the feedback obtained. A research method that supports such an approach is "design research". By its nature, design research is relevant for educational practice, as it aims to develop research-based solutions for complex problems in educational practice or to develop or validate theories about processes of learning and teaching [VdAGMN06]. Design research incorporates systematic educational design processes and, like all systematic educational and instructional design processes, it is cyclical in character: analysis, design, evaluation, and revision activities are iterated until an appropriate balance between ideals ("the intended") and realization has been achieved [VdAGMN06].</p>
      <p>Clearly, this type of method requires numerous classroom experiments to be carried out, and replicated, in order to draw relevant information from them.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In this paper, we described our vision of a mobile application to assist CT assessment, and we discussed the challenges that need to be solved in order to create such an application. The results of this analysis allow us to outline a research agenda that we believe will help to implement the idea we have described. An interdisciplinary approach is needed, which draws from software engineering, didactics, and cognitive psychology. Research efforts should focus primarily on: i) definition of the assessment framework (using a three-step approach: what? so what? now what?); ii) implementation of a software product that will operationally sustain the assessment framework; iii) data mining and effective visualization; iv) empirical research in the classroom.</p>
    </sec>
    <sec id="sec-5">
      <title>References</title>
      <p>[AM15] Hayley Anthony and Louise A. McLean. Promoting mental health at school: short-term effectiveness of a popular school-based resiliency programme. Advances in School Mental Health Promotion, 8(4):199-215, 2015.</p>
      <p>[AVM15] Narges Adibsereshki, Abas Mahvashe Vernosfaderani, and Guita Movallali. The effectiveness of life skills training on enhancing the social skills of children with hearing impairments in inclusive schools. Childhood Education, 91(6):469-476, 2015.</p>
      <p>[BCR94] Victor R. Basili, Gianluigi Caldiera, and H. Dieter Rombach. The goal question metric approach. In Encyclopedia of Software Engineering. Wiley, 1994.</p>
      <p>[BR12] Karen Brennan and Mitchel Resnick. New frameworks for studying and assessing the development of computational thinking. In 2012 Annual Meeting of the American Educational Research Association (AERA'12), pages 1-25, Vancouver, Canada, 2012. AERA.</p>
      <p>[CF15] L. Corral and I. Fronza. Better code for better apps: A study on source code quality and market success of Android applications. pages 22-32, 2015.</p>
      <p>[Cro14] Dan Crow. Why every child should learn to code, February 2014.</p>
      <p>[Edw11] Michael Edwards. Algorithmic composition: computational thinking in music. Communications of the ACM, 54(7):58-67, 2011.</p>
      <p>[FEIC15] Ilenia Fronza, Nabil El Ioini, and Luis Corral. Students want to create apps: Leveraging computational thinking to teach mobile software development. In Proceedings of the 16th Annual Conference on Information Technology Education, SIGITE '15, pages 21-26, New York, NY, USA, 2015. ACM.</p>
      <p>[FEIJ+14] Ilenia Fronza, Nabil El Ioini, Andrea Janes, Alberto Sillitti, Giancarlo Succi, and Luis Corral. If I had to vote on this laboratory, I would give nine: Introduction on computational thinking in the lower secondary school: Results of the experience. Mondo Digitale, 13(51):757-765, 2014.</p>
      <p>[Few13] Stephen Few. Information Dashboard Design: Displaying data for at-a-glance monitoring, volume 5. Analytics Press, Burlingame, CA, 2013.</p>
      <p>[FIC17] Ilenia Fronza, Nabil El Ioini, and Luis Corral. Teaching computational thinking using agile software engineering methods: A framework for middle schools. ACM Transactions on Computing Education (TOCE), 17(4):19, 2017.</p>
      <p>[FZ15] I. Fronza and P. Zanon. Introduction of computational thinking in a hotel management school [Introduzione del computational thinking in un istituto alberghiero]. Mondo Digitale, 14(58):28-34, 2015.</p>
      <p>[GP13] Shuchi Grover and Roy Pea. Computational thinking in K-12: A review of the state of the field. Educational Researcher, 42(1):38-43, Jan/Feb 2013.</p>
      <p>[Gro15] Shuchi Grover. "Systems of assessments" for deeper learning of computational thinking in K-12. In Annual Meeting of the American Educational Research Association, pages 1-9, Washington, DC, 2015. AERA.</p>
      <p>[Gro17] Shuchi Grover. Assessing algorithmic and computational thinking in K-12: Lessons from a middle school classroom. In Emerging research, practice, and policy on computational thinking, pages 269-288. Springer, 2017.</p>
      <p>[HL15] Roxana Hadad and Kimberly A. Lawless. Assessing computational thinking. In Encyclopedia of Information Science.</p>
      <p>[JSS13] Andrea Janes, Alberto Sillitti, and Giancarlo Succi. Effective dashboard design. Cutter IT Journal.</p>
      <p>[Net13] Warren Nettleford. Primary school children learn to write computer code, July 2013.</p>
      <p>[RGMLR17] Marcos Roman-Gonzalez, Jesus Moreno-Leon, and Gregorio Robles. Complementary tools for computational thinking assessment. In Proceedings of the International Conference on Computational Thinking Education (CTE 2017), S. C. Kong, J. Sheldon, and K. Y. Li (Eds.), pages 154-159. The Education University of Hong Kong, 2017.</p>
      <p>[RWI10] Alexander Repenning, David Webb, and Andri Ioannidou. Scalable game design and the development of a checklist for getting computational thinking into public schools. In Proceedings of the 41st ACM Technical Symposium on Computer Science Education, SIGCSE '10, pages 265-269, New York, NY, USA, 2010. ACM.</p>
      <p>[SFH+12] Amber Settle, Baker Franke, Ruth Hansen, Frances Spaltro, Cynthia Jurisson, Colin Rennert-May, and Brian Wildeman. Infusing computational thinking into the middle- and high-school curriculum. In Proceedings of the 17th ACM Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE '12, pages 22-27, New York, USA, 2012. ACM.</p>
      <p>[VdAGMN06] Jan Van den Akker, Koeno Gravemeijer, Susan McKenney, and Nienke Nieveen. Educational design research. Routledge, 2006.</p>
      <p>[Win14] Jeannette M. Wing. Computational thinking benefits society, January 2014.</p>
      <p>[Wor94] World Health Organization and others. Life skills education for children and adolescents in schools. 1994.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>