<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enhancing a Theory-Focused Course Through the Introduction of Automatically Assessed Programming Exercises – Lessons Learned</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Melf Johannsen</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chris Biemann</string-name>
          <email>biemann@informatik.uni-hamburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universitat Hamburg, Language Technology Group, Vogt-Kolln-Stra e 30</institution>
          ,
          <addr-line>22527 Hamburg, Germany https://</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Universitat Hamburg</institution>
          ,
          <addr-line>Vogt-Kolln-Stra e 30, 22527 Hamburg, Germany https://</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universitat Hamburg, Center for Optical Quantum Technologies</institution>
          ,
          <addr-line>Luruper Chaussee 149, 22761 Hamburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>In this paper, we describe the lessons we learned while introducing automatically assessed programming exercises into a Bachelor's level course on algorithms and data structures in the Winter semester 2019/2020, a course taken by around 300 students each year. The course used to focus mostly on theoretical and formal aspects of selected algorithms and data structures. While still maintaining the primary focus of a theoretical computer science course, we introduce a secondary objective of enhancing programming competence by giving practical programming exercises based on selected topics from the course. With these assignments, the students should improve their understanding of the theoretical aspects as well as their programming skills. The programming assignments were given at regular intervals during the lecture period, with a thematic alignment between assignments and lectures. To compensate for the new set of tasks, the workload of assignments on theoretical aspects was reduced. We describe the different experiences and lessons learned through the introduction and conduct of these exercises. A user study with 44 participants shows that the introduction was well received by the students, although improvements are still possible, especially in the area of feedback to the students.</p>
      </abstract>
      <kwd-group>
        <kwd>Automatic Assessment of Programming Exercises</kwd>
        <kwd>CodeRunner</kwd>
        <kwd>Lessons Learned</kwd>
        <kwd>Moodle</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>One of the key competences a student of computer science should possess at the
end of his or her studies is the ability to write computer programs.</p>
      <p>
        To support students in learning this important skill, many tools for
automatically assessed programming exercises have been developed in recent years
[
        <xref ref-type="bibr" rid="ref14 ref2">2,14</xref>
        ]. To help the students improve their programming skills, new automatically
assessed programming exercises were introduced in the course Algorithmen und
Datenstrukturen (algorithms and data structures) at the Universität Hamburg,
taken by around 300 students in the Winter semester 2019/2020. In total, six
blocks of exercises were created, in which the students had to participate. In this
paper, we share our experiences and lessons learned from implementing these
programming exercises in practice.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        There are many publications about the details of different tools for automatic
assessment of programming tasks (e.g. see the reviews [
        <xref ref-type="bibr" rid="ref1 ref14 ref2 ref7">1,2,7,14</xref>
        ]). All of those
reviews have a slightly different focus on the topic of automatically assessed
programming exercises. While Caiza and Del Alamo [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] present a list of assessment
tools, Ihantola et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] discuss the technical features found in different
assessment software. Both Ala-Mutka [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and Souza et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] include methodological
aspects (e.g. testing for different quality measures like efficiency or test coverage
in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or specialisation of tools like quizzes or contests in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]) in their analysis.
      </p>
      <p>
        In comparison, literature on the actual experience of introducing these tools
into regular classes seems to be relatively sparse. However, there are publications
describing the experiences of introducing automatically assessed (programming)
tasks with regard to exercise design [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], plagiarism [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], resource usage [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
resubmission policies [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] (although they do not describe programming tasks, their tasks
assess the understanding of algorithms at a concept level), and even the redesign
of whole courses [
        <xref ref-type="bibr" rid="ref11 ref8">8,11</xref>
        ] including exams [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        In our work, we use the CodeRunner tool for automatic assessment. The tool
was developed by Lobb and Harlow [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Croft and England [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] described their
experiences of introducing CodeRunner; however, their publication focuses on
the technical details rather than on their actual experiences in deploying and using
CodeRunner.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Context and Prior State</title>
      <p>
        Currently, e-learning at Universität Hamburg is mainly used for the distribution
of files (like lecture notes or exercise sheets) and for communication, to the
best knowledge of the authors. There are only a few cases where the potential of
blended learning [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is used. One example of such a project is the CaTS project
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], in which our department participated. In that project, online self-assessment
tests were developed for the class Formale Grundlagen der Informatik I und II
(theoretical foundations of computer science, levels 1 and 2).
      </p>
      <p>The goal of the Bachelor's level course Algorithmen und Datenstrukturen
(algorithms and data structures) is to teach the students the principles of efficient
algorithms, both in theoretical and practical terms. Each year, around 300
students participate in the course. Prior to the introduction of the programming
exercises, the main focus of the module was on the theoretical and formal
aspects of selected algorithms and data structures. Because programming skills
were mainly taught in different modules, the practical aspects (sample
applications, programming tasks) were not discussed. With this development, our goal
is to blur the distinction between theoretical and practical courses, thereby allowing
students to implement theoretical concepts from scratch.</p>
      <p>
        In the course, the available e-learning platform Moodle (https://moodle.org/) was previously used
for sharing documents and for communication through a forum. In addition,
students were able to check the progress of their course achievements; these,
however, had to be entered manually by the instructors. The introduction of online
tests in the form of automatically assessed programming exercises is a novelty
for the course. The implementation of these new exercises was done using the
CodeRunner plugin [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] for Moodle.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Design and Deployment</title>
      <p>By developing these programming exercises, we wanted to allow the students to
deepen their understanding of the algorithms and data structures discussed in
the lecture. This was done by letting the students implement different algorithms
and sometimes use those algorithms to solve different tasks. One welcome
side effect was improving the programming skills of the students through these
exercises. To compensate for the additional work caused by the new exercises,
the workload of assignments in the area of theory had to be reduced.</p>
      <p>All exercises were created based on the topics of the lecture. We created the
different tasks by first defining their requirements. Based on these, we
chose suitable algorithms and data structures for the programming tasks. Those
were transformed into the actual task, the test cases and an example solution.
The same procedure was used to create example programs for the
lecture itself.</p>
      <p>The students were required to pass the exercises in order to complete
the course. As such, the students were externally motivated to complete the
programming exercises. One example of such a task from the point of view of the
students (including short explanations of all important user interface elements)
can be seen in Fig. 1. In total, 10 tasks were created, which were combined into
6 blocks. The students could choose whether they wanted to use Java or Python;
for each block, the better result was counted. For each block, the students were
given two weeks to complete the tasks. For each task, the students had 10 tries to
develop a correct solution that passes all test cases; however, they could also test
their solution on a smaller set of pre-test cases. The test cases were composed of
corner cases (e.g. empty input, maximum input value), normal cases and random
tests. The random tests prevented the students from hard-coding the test results
into their programs. All test cases were restricted in execution time and memory
usage; however, the provided limits were more than enough to pass all test
cases even with inefficient solutions. Feedback to the students was only sent
through the result of the test cases, since manual feedback would have put a lot
of additional work on the instructors and thus was not feasible. In addition, a
sample solution was provided for each task. The average length of the provided
sample solution, including source code comments, amounted to 24.5 and 19.7
lines of code for Java and Python, respectively, with both peaking at 41. All
tasks were perceived as easy by all instructors.</p>
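      <p>To illustrate the mix of corner, normal and random test cases described above, the following sketch (in Python, one of the two supported languages) shows how such a battery of checks could be assembled against a reference solution. The task (insertion sort) and all names are hypothetical illustrations, not taken from the actual course material.</p>
      <preformat>
```python
import random

# Hypothetical task: "implement insertion sort". Sketch of how corner,
# normal and random test cases could be combined, as described above.

def insertion_sort(xs):
    """Reference solution the test cases compare against."""
    result = list(xs)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def run_tests(student_sort):
    """Return the list of failing inputs for a student submission."""
    # Corner cases: empty input, single element, extreme values.
    corner = [[], [7], [2**31 - 1, -(2**31)]]
    # Normal cases: typical unsorted inputs, duplicates, reversed order.
    normal = [[3, 1, 2], [5, 5, 1, 5], [9, 8, 7, 6]]
    # Random tests: freshly drawn inputs prevent hard-coded answers.
    rng = random.Random()
    rand = [[rng.randint(-100, 100) for _ in range(rng.randint(2, 20))]
            for _ in range(5)]
    failed = []
    for case in corner + normal + rand:
        if student_sort(case) != sorted(case):
            failed.append(case)
    return failed

# A correct submission passes every case.
assert run_tests(insertion_sort) == []
```
      </preformat>
      <p>Because the random inputs are drawn anew on every run, a submission that merely returns memorised outputs for the published cases still fails.</p>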
      <p>To facilitate communication with the students, multiple channels were
offered, both for announcements and for questions. These include
a mailing list, Moodle-based communication (a forum as well as announcements)
and special tutorials, which students could join at will.</p>
    </sec>
    <sec id="sec-5">
      <title>Lessons Learned</title>
      <p>While the technical setup did not pose notable issues and overall the students
were able to use the system and achieve their learning goals, we encountered
some issues. These are described below.</p>
      <sec id="sec-5-1">
        <title>Heterogeneity of Student Knowledge</title>
        <p>Due to the curricular structure of the Universität Hamburg as well as possible
extracurricular activity, the knowledge of the students when starting the course
is highly diverse. Firstly, the students are enrolled in different study programs.
Because of this, there is no single programming language everyone is trained
in. As a consequence, we had to develop the tasks in different programming
languages (Java and Python), which significantly increased the effort, as this
does not only imply creating the tasks twice, but also requires modelling this on
the side of automatic score reporting. In addition, the students greatly diverged
in programming skill levels. While some perceived the tasks as quite difficult,
there was also a smaller group who found the tasks to be extremely easy.</p>
      </sec>
      <sec id="sec-5-1a">
        <title>Extra Layer of Abstraction</title>
        <p>CodeRunner adds an extra layer of abstraction between the student and the
system on which the code is actually run. This extra layer caused many problems
for the students.</p>
        <p>For one thing, it is not directly visible what exactly the underlying system is
doing, and especially what effect the different user interface elements have on the
system. To reduce difficulties, we used different counter-measures: a live
demonstration at the beginning of the semester, as well as a user manual that students
could consult on any question. Still, the students had problems with the user
interface in the first weeks.</p>
        <p>In addition, errors in students' solutions are not easy to debug. Although any
compiler errors or failed test cases are shown, the execution of the source code
could not directly be analysed with standard tools like a debugger. It
proved helpful to provide special source code files, which allowed the students
to develop solutions on their own computers by emulating the behaviour of the
system.</p>
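        <p>A minimal sketch of such a source code file, assuming a hypothetical task and invented pre-test cases: students fill in the solution function and run the file locally, so that a standard debugger can be attached without the grading system being involved.</p>
        <preformat>
```python
# Hypothetical local scaffold; all names and cases are invented for
# illustration. Students fill in solve() and run this file on their own
# computer, emulating how the grading system calls their code.

PRE_TESTS = [
    # (arguments, expected output) pairs mirroring the published pre-tests.
    (([3, 1, 2],), [1, 2, 3]),
    (([],), []),
]

def solve(xs):
    """Student implementation goes here (sample: return a sorted copy)."""
    return sorted(xs)

def main():
    for args, expected in PRE_TESTS:
        got = solve(*args)
        status = "PASS" if got == expected else "FAIL"
        print(f"{status}: solve{args} yields {got!r}, expected {expected!r}")

if __name__ == "__main__":
    main()
```
        </preformat>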
        <p>Finally, students were quick to blame the system for any error instead of
looking for it in their own solution. For example, one student blamed
the system for not allowing enough execution time for his solution although he
had programmed an infinite loop. Because of this and similar cases,
we had a high demand for support (see below).</p>
      </sec>
      <sec id="sec-5-2">
        <title>Students' Creativity</title>
        <p>We observed that many students had problems applying the knowledge they
gained in the lecture to the programming exercises. As a result, many students
tried to use their own creative, self-developed algorithms instead of the
algorithms presented during the lecture. This was also the case for tasks like
'Implement algorithm X'. Often, these algorithms had problems in different
cases (especially corner cases), which caused malfunctions in both normal
execution and our test cases. Because of this, it proved especially important
to cover each possible cause of errors with its own respective test case, some
of which were hard to anticipate. This way, students could analyse each failed
test case individually and easily find their errors. Whenever there was a cause
of error we did not anticipate (and therefore did not have a test case for), we could
observe the students having more problems.</p>
        <p>In addition, there were cases where a student had problems with the random
test cases while at the same time passing all other test cases. This shows
two things: there might be hidden problems in the student's solution, and
we were missing some test cases. While we plan to improve our test
cases by collecting these issues, it is not always possible to avoid such problems,
since tasks might have to be changed each year to ensure they remain unseen.</p>
      </sec>
      <sec id="sec-5-3">
        <title>High Demand of Support</title>
        <p>Although there were no major problems with the programming exercises and the
systems ran stably, there was still a high demand for support from the students
in the form of questions and support requests whenever they were not able to
solve an issue on their own. This includes, for example, questions concerning the
interpretation of the given task, technical difficulties, issues with their solution,
and organizational questions. Since most of the aforementioned problems and
resolutions were very specific to the students' solutions, the support was highly
individual and therefore demanded much time and effort.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Evaluation</title>
      <p>
        To evaluate the acceptance of the programming exercises by the students, we
conducted a user study. The study was carried out following Kreidl [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], who defined
multiple variables (grouped into 4 categories) that contribute to the acceptance
of e-learning systems by students. For the evaluation, we used a modified version
of his questionnaire (which was originally in German, and we conducted the
user study in German). The scale used in the survey is inverted compared to
the original publication by Kreidl. The variables voluntariness and incentives
(participation was mandatory to pass the course) and exam preparation (the
programming exercises were not relevant for the exam) were not tested, for the
reasons given.
      </p>
      <p>Out of the 300 students that took part in the course, 44 students additionally
participated in the user study on a voluntary basis. As can be seen in Tab. 1, the
programming exercises were generally well accepted by the students (all values
are around 2). However, improvements can be made especially in the area of
feedback to the students (variables availability of tasks and learning causes and
feedback to the students). The value intensity of usage is low in comparison to
the others; however, this is expected, since it was intended that the students do
the programming exercises only a single time.</p>
      <p>We also evaluated the amount and difficulty of the tasks. The students could
rate the difficulty on a scale of 1 (too easy) to 5 (too difficult), with 3 meaning
adequate. The amount of tasks was evaluated on a similar scale, from 1
(too many) to 5 (too few), with 3 meaning a good number of tasks. The
tasks received a difficulty rating of 3.0 (σ = 0.7) whereas the amount received a rating
of 2.7 (σ = 0.7). This shows that our tasks were perceived as having the right
number and difficulty for the course.</p>
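      <p>Means and standard deviations of this kind can be computed from the raw ratings as in the following sketch; the ratings listed below are invented for illustration and are not the actual survey data.</p>
      <preformat>
```python
# Compute the mean and (population) standard deviation of Likert-style
# ratings; the list below is hypothetical example data, not the survey.
from statistics import mean, pstdev

difficulty_ratings = [3, 2, 4, 3, 3, 2, 4, 3]  # scale: 1 (too easy) to 5 (too difficult)
print(round(mean(difficulty_ratings), 1))    # mean rating
print(round(pstdev(difficulty_ratings), 1))  # standard deviation
```
      </preformat>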
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>In this paper, we described our experiences of introducing automatically assessed
programming exercises in a Bachelor's level course on algorithms and
data structures in computer science with around 300 yearly participants. The
course mostly focuses on theoretical and formal aspects. Overall, the
introduction of the programming exercises was successful, although we experienced
some difficulties in the areas of the mixed prior knowledge of participants,
students' creativity, the extra abstraction layer of CodeRunner, and a high demand
for support. A user study shows that the programming exercises were accepted
by the students, although there is still room for improvement, especially in the
area of feedback to the students concerning the specific issues of their solutions.</p>
      <p>Currently, it is planned to continue the programming exercises in next year's
course. Improvements are planned especially for providing better feedback. Since
manual feedback by instructors is not feasible for the course, it is planned to
improve feedback both by improving the test cases and by enriching the feedback included in
the test cases (e.g. purpose of the test case and common mistakes).</p>
      <p>Acknowledgements. This research was supported by MINTFIT Hamburg.
MINTFIT Hamburg is a joint project of Hamburg University of Applied
Sciences (HAW), HafenCity University Hamburg (HCU), Hamburg University of
Technology (TUHH), University Medical Center Hamburg-Eppendorf (UKE) as
well as Universitat Hamburg (UHH) and is funded by the Hamburg Authority
for Science, Research and Gender Equality.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ala-Mutka</surname>
            ,
            <given-names>K.M.:</given-names>
          </string-name>
          <article-title>A Survey of Automated Assessment Approaches for Programming Assignments</article-title>
          .
          <source>Computer Science Education</source>
          <volume>15</volume>
          (
          <issue>2</issue>
          ),
          <fpage>83</fpage>
          –
          <lpage>102</lpage>
          (
          <year>2005</year>
          ). https://doi.org/10.1080/08993400500150747
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Caiza</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Del Alamo</surname>
            ,
            <given-names>J.M.:</given-names>
          </string-name>
          <article-title>Programming assignments automatic grading: review of tools and implementations</article-title>
          .
          <source>In: 7th International Technology, Education and Development Conference (INTED2013)</source>
          . pp.
          <fpage>5691</fpage>
          –
          <lpage>5700</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Cheang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kurnia</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oon</surname>
            ,
            <given-names>W.C.</given-names>
          </string-name>
          :
          <article-title>On automated grading of programming assignments in an academic institution</article-title>
          .
          <source>Computers &amp; Education</source>
          <volume>41</volume>
          (
          <issue>2</issue>
          ),
          <fpage>121</fpage>
          –
          <lpage>131</lpage>
          (
          <year>2003</year>
          ). https://doi.org/10.1016/S0360-1315(03)00030-7
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Croft</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>England</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Computing with CodeRunner at Coventry University: Automated Summative Assessment of Python and C++ Code</article-title>
          .
          <source>In: Proceedings of the 4th Conference on Computing Education Practice 2020 (CEP 2020)</source>
          (
          <year>2020</year>
          ). https://doi.org/10.1145/3372356.3372357
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Friesen</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Report: Defining Blended Learning</article-title>
          . Retrieved from https://www.normfriesen.info/papers/Defining_Blended_Learning_NF.pdf on Apr 3rd
          <year>2020</year>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Goethe-Universität:
          <article-title>Computerbasiertes adaptives Testen im Studium</article-title>
          . https://www.studiumdigitale.uni-frankfurt.de/66776844/CaTS, last accessed: 26.02.2020
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ihantola</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ahoniemi</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karavirta</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , Seppala,
          <string-name>
            <surname>O.</surname>
          </string-name>
          :
          <article-title>Review of Recent Systems for Automatic Assessment of Programming Assignments</article-title>
          .
          <source>In: Proceedings of the 10th Koli Calling International Conference on Computing Education Research</source>
          . pp.
          <fpage>86</fpage>
          –
          <lpage>93</lpage>
          . Koli Calling '10
          (
          <year>2010</year>
          ). https://doi.org/10.1145/1930464.1930480
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kaila</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kurvinen</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lokkila</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laakso</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          :
          <article-title>Redesigning an Object-Oriented Programming Course</article-title>
          .
          <source>ACM Transactions on Computing Education</source>
          <volume>16</volume>
          (
          <issue>4</issue>
          ) (
          <year>2016</year>
          ). https://doi.org/10.1145/2906362
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kreidl</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Akzeptanz und Nutzung von E-Learning-Elementen an Hochschulen. Gründe für die Einführung und Kriterien der Anwendung von E-Learning</article-title>
          .
          <source>Waxmann</source>
          (
          <year>2011</year>
          ), http://nbn-resolving.org/urn:nbn:de:0111-opus-82880
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Lobb</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harlow</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Coderunner: A Tool for Assessing Computer Programming Skills</article-title>
          .
          <source>ACM Inroads</source>
          <volume>7</volume>
          (
          <issue>1</issue>
          ),
          <fpage>47</fpage>
          –
          <lpage>51</lpage>
          (
          <year>2016</year>
          ). https://doi.org/10.1145/2810041
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lokkila</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaila</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karavirta</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salakoski</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laakso</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Redesigning Introductory Computer Science Courses to Use Tutorial-Based Learning</article-title>
          .
          <source>In: EDULEARN16 Proceedings</source>
          . pp.
          <fpage>8415</fpage>
          –
          <lpage>8420</lpage>
          . 8th International Conference on Education and New Learning Technologies (
          <year>2016</year>
          ). https://doi.org/10.21125/edulearn.2016.0837
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Malmi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karavirta</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Korhonen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nikander</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>Experiences on Automatically Assessed Algorithm Simulation Exercises with Different Resubmission Policies</article-title>
          .
          <source>Journal on Educational Resources in Computing</source>
          <volume>5</volume>
          (
          <issue>3</issue>
          ),
          <fpage>7:1</fpage>
          –
          <lpage>7:23</lpage>
          (
          <year>2005</year>
          ). https://doi.org/10.1145/1163405.1163412
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Rajala</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaila</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Linden</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kurvinen</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lokkila</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Laakso</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salakoski</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Automatically Assessed Electronic Exams in Programming Courses</article-title>
          .
          <source>In: Proceedings of the Australasian Computer Science Week Multiconference</source>
          . pp.
          <fpage>11:1</fpage>
          –
          <lpage>11:8</lpage>
          . ACSW '16
          (
          <year>2016</year>
          ). https://doi.org/10.1145/2843043.2843062
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Souza</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Felizardo</surname>
            ,
            <given-names>K.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barbosa</surname>
            ,
            <given-names>E.F.</given-names>
          </string-name>
          :
          <article-title>A Systematic Literature Review of Assessment Tools for Programming Assignments</article-title>
          .
          <source>In: 2016 IEEE 29th International Conference on Software Engineering Education and Training (CSEET)</source>
          . pp.
          <fpage>147</fpage>
          –
          <lpage>156</lpage>
          (
          <year>2016</year>
          ). https://doi.org/10.1109/CSEET.2016.48
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>