     On Using Hybrid Pedagogy as Guideline for
           Improving Assessment Design

          Christian Köppe1[0000−0003−0326−678X] and Rody Middelkoop2
1 Utrecht University, Utrecht, Netherlands, c.koppe@uu.nl
2 HAN University of Applied Sciences, Arnhem/Nijmegen, Netherlands, rody.middelkoop@han.nl



        Abstract. An essential element of higher education is the assessment of
        student work, either formative for improving learning or summative for
        looking back at what has been achieved. However, applying assessments
        as part of larger assignments is prone to some phenomena, such as stu-
        dents not being aware of the quality of their work during the assignment,
        or assessing at unsuitable moments in time, resulting in unnecessarily
        low grades. In this work we discuss dichotomy-thinking as a possible
        reason and show how Hybrid Pedagogy as a design guideline can help
        with finding appropriate solutions. Besides discussing this approach in
        general, we also provide concrete examples of how it was applied in the
        design of assessment strategies in a course on software engineering.

        Keywords: assessment design · hybrid pedagogy.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0).


1     Introduction
Assessments, both formative and summative, form an essential element in higher
education: they provide insight into the outcomes of student learning, offer
opportunities for feedback, and check whether learning goals have been met. A
common course design comprises various assessment types such as written exams
or obligatory tests, and often there are one or more larger, longer-running
assignments as well. These assignments are usually assessed after the final
version has been handed in. Additionally, there are some moments when feedback
is given, or even an intermediate assessment is conducted, based on the student's
current work status.
    While this is an established approach, we can still observe some phenomena
which potentially have a negative impact on student performance and hence
hinder learning, albeit to varying degrees. With respect to timing and quality
issues, the following specific examples of such phenomena can be recognized:

  – Timing Phenomenon: Snapshot Assessments
    In many longer-running assignments such as projects, case studies or research
    trajectories, there are some fixed assessment moments: usually at the end of
    a course and at midterm, or at regular intervals, e.g. every two weeks. These
    midterm assessments often result in lower grades because they are a snapshot
    in time, looking at work in progress. Many teachers use this approach to show
    the students that they need to work harder and deliver better quality, hoping
    that this motivates them. But even though it might become obvious to students
    where their shortcomings are, the grade is given and usually cannot be improved.
    This is unnecessarily frustrating, especially if the students actually are able
    to deliver much better quality, just not at this snapshot moment. It seems that
    in that case the assessment is somehow disconnected from the desired learning
    outcomes.
 – Timing Phenomenon: Feedback Timing
    In many cases, students get feedback some time after they have handed in
    some work. Usually the students have already continued working on the project
    or have shifted focus to other assignments, which makes it more difficult for
    them to relate the feedback to their work, as it is not present in their heads
    anymore. However, there are also occasions where feedback is given directly,
    e.g. as part of an assessment session or during a working group. These moments
    are of more value, as the feedback is easier to relate directly to the work,
    but they are usually also rarer, as they require more of the teacher's time.
    Furthermore, they often do not fit into the schedules of both students and
    teachers.
 – Quality Phenomenon: Quality Unawareness
    Students often seem unsure about the quality of their final work, or have a
    rough feeling about the quality but are unable to predict the grade. Even if
    they apply self-assessment, they are not sure whether the teacher will come
    to the same result. Therefore the final grades often come as a surprise, being
    either lower or higher than expected, which might result in decreasing
    confidence in a fair grading system.
 – Quality Phenomenon: Little Value of Feedback
    Feedback which is given by teachers as part of assessments is not always
    experienced as valuable by students. It often serves for looking back and is
    not experienced as relevant for future work. If the feedback focuses on the
    quality of the work (as it should), then it is often experienced as not
    directly relevant for getting a higher grade. Its value is thus limited in a
    grade-centered educational system, as present in most higher education
    institutions.

    There likely are many reasons for these phenomena. We assume that one
reason is the lack of awareness of alternatives. As a consequence, educational de-
signers of such assignment and assessment strategies tend to rely on well-known
standard solutions. A potential cause for the lack of alternative educational de-
sign solutions might be a proneness to thinking in dichotomies, which was also
discussed for various other educational domains (see e.g. [14, 17, 21, 6]).
    We believe that consciously intermingling these dichotomies, which is the
core of the concept of Hybrid Pedagogy, might help to open up the space for
new solutions that positively influence the aforementioned phenomena, and can
therefore help to improve the design of educational strategies. In this work we
discuss how this hybrid approach was applied to the design of assessment
strategies which potentially have a positive impact on student performance and
in consequence improve learning.
    In the next section, we briefly introduce dichotomies in education and
describe the concrete dichotomies which have an impact on the aforementioned
phenomena. This is followed by an introduction to Hybrid Pedagogy and a
description of concrete examples of how hybridity was used to find alternative
solutions which potentially influence the phenomena described above. The
paper concludes with a summary and an outlook on future work.


2    Dichotomies in Education

A dichotomy can generally be defined as two things that lie at the outer ends
of a specific dimension, often contrasting or opposing each other. Such di-
chotomies can be encountered in many aspects of education. In mathematics
education, Sierpinska discussed the dichotomy of practical versus theoretical
thinking and the issues related to it [17]. Warren et al. explore the dichotomy
of everyday versus scientific modes of thinking in science learning [21]. Heames
and Service discuss a variety of dichotomies which influence teaching tech-
niques in business education [6]. Stommel provides more generic examples from
education, such as physical versus digital learning spaces, informal versus formal
learning contexts, individual teachers and students versus collaborative commu-
nities, academic products versus learning processes, or learning in schools versus
learning in the world [18].
    Following these examples, we can identify some specific dichotomies which
might have an impact on the phenomena mentioned above. We discuss these
with respect to the timing and quality aspects as well.

 – Timing Dichotomy: Planning for Organization versus Planning for Learning
   Fixed assessment moments are often dictated by organizational planning
   issues and are not intended as milestones related to quality aspects. They are
   dictated by the availability of the assessor, a certain moment based on the
   total duration of the assignment (such as after each third of, or halfway
   through, the assignment), academic holiday planning and other similar issues.
   This seems understandable, as most educational institutions work with fixed
   time structures such as two semesters per year, courses with a delimited
   duration of multiple weeks to a few months, or other temporal restrictions.
   It also happens that students take courses in parallel and that, with the
   goal of making studying easier, assessment moments are distributed in such
   a way that students do not get overloaded. The contrast would be to adjust
   assessment to students' learning, meaning that assessment happens when
   students have achieved a learning goal or created a product that fulfills some
   pre-defined quality criteria. However, there are only few examples related to
   this planning-for-learning end of the dimension, one of them being
   Programmatic Assessment [16].

   This dichotomy is an example where most educators choose, for the above-
   mentioned reasons, mainly one end of the dimension: planning for organization.
   This is likely one of the reasons for Snapshot Assessments.
– Timing Dichotomy: Synchronous versus Asynchronous Feedback
   Most assignments include the delivery of some work products, and students
   get feedback from teachers on these products. There are usually two feed-
   back modes: (1) synchronous feedback, where feedback is given directly and
   immediately on some product and the feedback receiver and giver interact
   with each other, and (2) asynchronous feedback, where feedback is provided
   some time after the product has been handed in and no direct interaction
   takes place. Both have advantages and disadvantages.
   With synchronous feedback, which is usually given in working groups or
   face-to-face sessions, the feedback given is more relevant because it arrives
   at a teachable moment: students are still engaged in working on the product
   and still thinking about the task domain [4]. The disadvantage of synchronous
   feedback is that it costs more time, the teacher is not able to look thoroughly
   at the product, and planning is not easy, as most teachers do not get sufficient
   time for providing larger amounts of synchronous feedback. The latter is also
   related to the issue of planning for organization.
   Asynchronous feedback offers the advantage that the teacher has more time
   to assess the product in depth and to provide more detailed feedback. Planning
   is less of an issue, as it usually does not matter much whether the feedback
   arrives a bit earlier or later. However, when the feedback arrives, students
   have likely already continued working on the product or even moved on to the
   next assignments or learning tasks. In both cases the feedback arrives when
   they are no longer engaged in working on the product and is therefore of less
   value.
   The sparse use of synchronous feedback, mainly because of planning issues,
   and the disadvantages of the more often applied asynchronous feedback mode
   both contribute to the phenomenon Feedback Timing.
 – Quality Dichotomy: Teacher versus Student Grade Determination Responsibility
   Some dichotomies are not directly recognizable as such. When thinking about
   the responsibility for determining students' grades, most teachers would not
   dare to argue that this responsibility lies anywhere else than with themselves.
   Likely reasons are that they fear losing control over the quality of the work,
   or grade inflation due to students' over-assessment. In consequence, this
   means that the students will never be fully aware of what quality level they
   have achieved with their product and hence what grade they could expect for
   it. Self-assessment does provide some help here, but it is often applied
   independently of actual grading3 and is therefore only partially helpful.
   There are, however, examples where teachers indeed have students grade
   their own work. If done well, most authors report various positive effects of
   self-grading, such as quicker and more detailed feedback for students, deeper
   understanding of the topic, and greater awareness of one's own strengths,
   progress, and gaps [3, 5, 19].
   Not involving students in the grading process contributes to Quality
   Unawareness.
3
   For example, the widely used learning management system Blackboard has two
   distinct modules: one for assignments with grading and one for self-assessments.
 – Quality Dichotomy: Formative versus Summative Assessment
   As teachers, we often either give feedback only, intended to support learning
   and improvement, or we provide a grade with some justification, usually after
   some work has been finished. These relate to the assessment functions of
   being either formative or summative.
   Both are valuable but also have some shortcomings: even though formative
   assessment helps the students know where they stand, they are dependent
   on the teacher to provide them with this information. This feedback is also
   often experienced as a to-do list by the students, potentially resulting in the
   effect that elements of their work on which no specific feedback is given
   are seen as good enough. Furthermore, the feedback valued most by students
   is that which tells them which parts are already good enough for a sufficient
   grade. This kind of feedback does not trigger a growth mindset, as students
   likely won't do more work on parts which are already of sufficient quality. It
   keeps the students reactive.
   Summative assessment, on the other hand, is mainly for looking back. Its
   relevance for the students is often limited, as the work has been finished and
   the students have usually already moved on to the next assignment or course.
   Only intrinsically motivated students see such feedback as being relevant for
   future work as well.
   The distinction between formative and summative assessments contributes
   to students' experience of Little Value of Feedback.

   The dichotomies described above are four examples which we assume have an
impact on the phenomena. In the next section we discuss how these dichotomies
can be addressed by using hybridity as an explicit design guideline.


3    Hybrid Assessment Design
As described by Heames and Service, applying kaleidoscope thinking, i.e. using
another viewpoint when something seems difficult from a certain point of view,
is a good start for developing solutions [6]. Hybridity, or Hybrid Pedagogy, can
be such a different viewpoint. It refers to “a mixture of different parts into a new
breed, form or culture” and “in higher education implies a pedagogical design
that mixes different discourses, formats, tools, people, contexts etcetera” [7].
    Rorabaugh and Stommel describe hybridity as follows:
         “[...] hybridity suggests hesitation at a threshold. Hybridity is not an
     attempt to neatly bridge the gap, but extends the moment of hesitation
     and thereby confuses easy categorization. And, as we allow two things
     to rub against each other, two things that might not otherwise touch, we
     invite them to interact, allowing synthesis (and even perforation) along
     their boundaries. As the digital and analog—the physical and virtual—
     commingle, we must let go of the containers for learning to which we’ve
     grown accustomed. We must open to random acts of pedagogy—to con-
     nections that are, like the web, associative and lively but sometimes
     violent and deformed. In this, hybridity is not always safe, moving in-
     cessantly (and dangerously) toward something new—something as yet
     undetermined.” [15, unpaginated]

    Hybrid Pedagogy is not a new concept, and there is a growing number of
examples of hybrid practices in education [13, 9, 8]. However, besides describing
existing hybrid practices, it can also serve educational designers as a design tool.
Applying hybridity as a guideline might help widen the solution space by
dissolving existing dichotomies. We applied it in this way during the assessment
design of a semester on object-oriented software engineering, in order to address
the phenomena described earlier in this work. The resulting solutions are
described in more detail in the next sections. We follow the flow in which these
solutions were applied instead of discussing them separately by timing and
quality aspects.

 – Solution: Self-Grading
   In the example in Figure 1, the unused solution space suggests sharing the
   responsibility for determining the grades with the students. This could be
   done by sharing the responsibility with them to varying degrees, or even by
   moving it completely to the students (as applied in self-grading).




Fig. 1. Explicit exploration of unused solution spaces (in green) towards students being
responsible


   The semester we designed was for the second year of a part-time Computer
   Science programme. The students were a diverse group, many of them already
   doing professional work in the field. Part of this semester was a long-running
   case study covering various aspects of software engineering such as requirements
   elicitation and software design. As we wanted to integrate academic and
   workplace learning and to increase the value of the study for the students, we
   also applied the hybrid practice of Bring Your Own Assignment4. This means
   that the students could decide on the content of the case study themselves,
   as long as they were able to fulfill the assessment criteria described in a set
   of rubrics.
   However, assessing these work products would have cost the teachers much
   more time due to the potential variety of techniques and application domains
   used. Besides that, we also wanted to help students become more aware of the
   quality of their products. The solution was to give the initial responsibility
   for grading to the students themselves and to shift the teacher's responsibility
   to determining whether the result of this self-grading was correct. Whenever
   students thought they had achieved a certain quality level in a product, based
   on a self-assessment using the provided rubrics, they were encouraged to apply
   self-grading and hand in a grading request.
   Applying this self-grading helped the students become more aware of the
   quality of their work. This effect was increased by adding the following
   practice of grade motivation.
 – Solution: Grade Motivation
   To make sure that this self-grading is done appropriately, we added the
   requirement that not only a grade (based on the rubrics) had to be requested,
   but that a sufficient motivation had to be provided as well, in order to show
   that the quality of the work is in accordance with the rubric quality level and
   the associated grade. This motivation had to be not just a repetition of the
   rubric descriptions, but a thorough underpinning of the achieved quality
   level. Figure 2 shows an example of a complete grading request, including (1)
   for whom it was, (2) for which assignment (the case study) and rubric, (3)
   the requested grade according to the quality description in the rubric, (4)
   the motivation for the grade, (5) the reference to the actual work product,
   and (6) a link to the grading queue tool (see the Grading Queue solution below).




Fig. 2. Example of a grading request (in Dutch) including the grade motivation (#4),
adapted from [11]

4
    Bring Your Own Assignment [13]: Students are less motivated to work on
    standard assignments offered to them, so have them work on assignments they
    proposed themselves.

   Having to provide this motivation increased the students' awareness of the
   quality of their work, as they had to determine it much more deeply and
   explicitly. This could consequently also lead to a general improvement of
   their self-assessment skills, which is part of future research. A minimal code
   sketch of such a grading request is given below.
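
   To make the structure of such a request concrete, the following sketch models
   it as a small data structure mirroring the six elements of Figure 2. It is a
   minimal sketch under our own assumptions: the names (RubricLevel,
   GradingRequest, motivation_goes_beyond_rubric) are invented for this
   illustration and do not reflect the tooling actually used in the course.

      # Minimal sketch of a grading request (cf. Figure 2). All names are
      # illustrative assumptions, not the actual course tooling.
      from dataclasses import dataclass, field
      from datetime import datetime

      @dataclass
      class RubricLevel:
          grade: float       # grade associated with this quality level
          description: str   # quality description taken from the rubric

      @dataclass
      class GradingRequest:
          student: str            # (1) for whom the request is
          assignment: str         # (2) the assignment, e.g. the case study
          rubric: str             # (2) the rubric the self-grading is based on
          requested_grade: float  # (3) grade matching an achieved rubric level
          motivation: str         # (4) underpinning of the achieved quality
          work_product_ref: str   # (5) reference to the actual work product
          submitted_at: datetime = field(default_factory=datetime.now)

          def motivation_goes_beyond_rubric(self, level: RubricLevel) -> bool:
              # The motivation must be a thorough underpinning of the achieved
              # quality level, not a mere repetition of the rubric description;
              # this check is an illustrative heuristic only.
              return self.motivation.strip() != level.description.strip()

   After constructing such a request, the student hands it in via the grading
   queue (element (6) of Figure 2), as described in the Grading Queue solution
   below.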
 – Solution: Continuous Assessment
   The first idea during the design of the semester was to define a number of
   fixed assessment moments. But when looking at it from the hybridity view-
   point, the option came up that assessments could take place whenever students
   think they have achieved certain quality levels for (parts of) their work. This
   idea was also triggered by the use of learning outcomes and by the fact that
   some of the students had already created products, e.g. in their professional
   work, which could serve as evidence that they had already achieved the
   learning outcomes. So we replaced the snapshot moments with student-defined
   milestones.




Fig. 3. Planning of the semester; the blue post-its on the left show the introduction
of an assignment, the yellow post-its on the right mark the final deadline; students
were encouraged to grade themselves between these moments as often as they wanted



   In consequence, grading was not limited to looking back at finished work,
   but was also applied to partial work results whenever these reached the
   pre-specified quality levels described in the rubrics. The grades were combined
   with feedback which was still relevant to the work, as it could be used for
   improvement until the final deadline. This way, grades and the associated
   feedback were used for looking both back and forward, making them a more
   valuable combination.

 – Solution: Grading Queue
   While synchronous, direct feedback is valuable, it is often hard to realize in
   sufficient quantity. The idea was to make asynchronous feedback as syn-
   chronous as possible, thereby combining the advantages of both. In the ex-
   ample in Figure 4, this means that feedback does not have to be either syn-
   chronous or asynchronous: it could also be asynchronous yet delivered in such
   a timely manner that it feels more synchronous (and also has the benefits of
   synchronous feedback).




Fig. 4. Explicit exploration of unused solution spaces (in green) which are neither
clearly synchronous nor asynchronous


   The resulting practice is a Grading Queue5. After students performed a
   Self-Grading, including a Grade Motivation, they had to add an issue to
   the queue (in our case a Kanban board), thereby letting the teachers know
   that they had performed a self-grading and were now waiting for feedback
   (see Figure 5 for an example). The teachers then picked the longest-waiting
   requests, examined them, and returned them with corresponding feedback.
   The effect was that most grading requests were handled within one or two
   days of being handed in. This way, the feedback that came with the handling
   was given close to the moment of finishing that part of the work. In conse-
   quence, this feedback was given during learning and students could still act
   on it, two characteristics of effective feedback [2]. This solution therefore
   addressed the issue of Feedback Timing. A sketch of the queue mechanics
   follows.
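
   As an illustration of the queue mechanics, the sketch below keeps open
   requests in order of waiting time, so that teachers always handle the
   longest-waiting request first. Again, this is a minimal sketch under our own
   assumptions: the course used a Kanban board rather than this code, GradingQueue
   and its methods are invented names, and the requests are the GradingRequest
   objects from the earlier sketch.

      # Minimal sketch of a Grading Queue: open grading requests are handled
      # longest-waiting first. Illustrative only; in the course this was a
      # Kanban board, not code.
      from collections import deque

      class GradingQueue:
          def __init__(self):
              self._open = deque()  # FIFO: longest-waiting request at the front

          def submit(self, request) -> None:
              # Called after a student performed a Self-Grading including a
              # Grade Motivation; signals the teachers that feedback is awaited.
              self._open.append(request)

          def next_request(self):
              # Teachers pick the longest-waiting request first.
              return self._open.popleft() if self._open else None

   A teacher's handling loop would repeatedly take next_request(), examine the
   work against the rubric, and return the confirmed (or corrected) grade
   together with feedback; in our course this kept the turnaround within one or
   two days.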
 – Improvement Encouragement
   Another important aspect of adding value to the feedback is to have students
   react to it. The solutions described above allowed for such reactions.
   Additional elements of the applied assessment strategy comprised these
   practices:
     • Act On Feedback [20] - Applied for closing the feedback loop by
       making sure that students have time to act on the feedback they have
       been given.
5
    Grading Queue (aka Grading Request Kanban) [10]: Provide an easily ac-
    cessible overview of all open grading requests, sorted by waiting time. Handle the
    grading requests in a structured, timely, and transparent manner.
38      C. Köppe and R. Middelkoop




              Fig. 5. Example of Grading Queue (adapted from [11])


     • Grade It Again, Sam [1] - The core of this practice is to permit
       students to change and re-submit an assignment for re-evaluation and
       re-grading, after it has been graded and feedback has been provided.
     • Go For Gold [10] - Encourage the students to continue improving
       their work, even, or especially, when they have already acquired a
       sufficient grade for it.
     • Repair It Yourself [12] - Let students correct their incorrect or
       flawed solutions, so that they better understand how to do it right.

4    Conclusion
In this work we described how we applied Hybrid Pedagogy as a guideline for
addressing some persistent phenomena of standard assessment strategies. First
experiences show that these practices indeed help with addressing the phenom-
ena. However, further research is needed to evaluate their effectiveness more
thoroughly.
    We believe that explicitly using hybridity as a guideline during educational
design can help to widen the solution space and to identify potential alternative
practices that address existing challenges in educational strategies. Exploring
this approach will also be part of future work. Further research is needed to
determine if, and to what extent, the approach of using Hybrid Pedagogy as a
design guideline is applicable in other educational domains as well.

References
 1. Bergin, J., Eckstein, J., Völter, M., Sipos, M., Wallingford, E., Marquardt, K.,
    Chandler, J., Sharp, H., Manns, M.L. (eds.): Pedagogical Patterns: Advice for
    Educators. Joseph Bergin Software Tools, New York, NY, USA (2012)

 2. Chappuis, J.: Seven Strategies of Assessment for Learning. Pearson College Div
    (2014)
 3. Crowell, T.L.: Student Self Grading: Perception vs. Reality. American Journal
    of Educational Research 3(4), 450–455 (2015).
    https://doi.org/10.12691/EDUCATION-3-4-10,
    http://pubs.sciepub.com/education/3/4/10/
 4. Dow, S., Kulkarni, A., Klemmer, S., Hartmann, B.: Shepherding the
    crowd yields better work. In: Proceedings of the ACM Conference on
    Computer Supported Cooperative Work, CSCW. pp. 1013–1022 (2012).
    https://doi.org/10.1145/2145204.2145355
 5. Edwards, N.M.: Student Self-Grading in Social Statistics. College Teach-
    ing 55(2), 72–76 (apr 2007). https://doi.org/10.3200/CTCH.55.2.72-76,
    http://www.tandfonline.com/doi/abs/10.3200/CTCH.55.2.72-76
 6. Heames, J.T., Service, R.W.: Dichotomies in Teaching, Application, and
    Ethics. Journal of Education for Business 79(2), 118–122 (nov 2003).
    https://doi.org/10.1080/08832320309599099
 7. Hilli, C., Nørgård, R.T., Aaen, J.H.: Designing Hybrid Learning Spaces in
    Higher Education. Dansk Universitetspædagogisk Tidsskrift 15(27), 66–82 (2019),
    https://tidsskrift.dk/dut/article/view/112644
 8. Kohls, C.: Hybrid learning spaces. In: Proceedings of the VikingPLoP 2017 Con-
    ference on Pattern Languages of Program - VikingPLoP. pp. 1–12. ACM Press,
    New York, New York, USA (2017). https://doi.org/10.1145/3158491.3158505,
    http://dl.acm.org/citation.cfm?doid=3158491.3158505
 9. Kohls, C., Nørgård, R.T., Warburton, S.: Sharing is Caring. In: Proceedings
    of the 22nd European Conference on Pattern Languages of Programs. pp.
    34:1–34:6. EuroPLoP '17, ACM, New York, NY, USA (2017).
    https://doi.org/10.1145/3147704.3147741,
    http://doi.acm.org/10.1145/3147704.3147741
10. Köppe, C., Manns, M.L., Middelkoop, R.: Educational Design Patterns for
    Student-Centered Assessments. In: Preprints of the 26th Conference on Pattern
    Languages of Programs, PLoP’19. Ottawa, Canada (2019)
11. Köppe, C., Manns, M.L., Middelkoop, R.: The Pattern Language of Incremental
    Grading. In: Proceedings of the 25th Conference on Pattern Languages of Pro-
    grams, PLoP’18. Portland, OR, USA (2019)
12. Köppe, C., Niels, R., Holwerda, R., Tijsma, L., Van Diepen, N., Van Turn-
    hout, K., Bakker, R.: Flipped classroom patterns: designing valuable in-class
    meetings. In: Proceedings of the 20th European Conference on Pattern Lan-
    guages of Programs - EuroPLoP ’15. vol. 08-12-July, pp. 1–17. ACM Press,
    New York, New York, USA (2015). https://doi.org/10.1145/2855321.2855348,
    http://dl.acm.org/citation.cfm?doid=2855321.2855348
13. Köppe, C., Nørgård, R.T., Pedersen, A.Y.: Towards a pattern language
    for hybrid education. In: Proceedings of the VikingPLoP 2017 Conference
    on Pattern Languages of Program - VikingPLoP. pp. 1–17. ACM Press,
    New York, New York, USA (2017). https://doi.org/10.1145/3158491.3158504,
    http://dl.acm.org/citation.cfm?doid=3158491.3158504
14. Lyons, L.L., Freitag, P.K., Hewson, P.W.: Dichotomy in thinking, dilemma in
    actions: Researcher and teacher perspectives on a chemistry teaching practice.
    Journal of Research in Science Teaching 34(3), 239–254 (mar 1997).
    https://doi.org/10.1002/(SICI)1098-2736(199703)34:3<239::AID-TEA3>3.0.CO;2-T

15. Rorabaugh, P., Stommel, J.: Hybridity, pt. 3: What Does Hybrid Pedagogy Do?
    Hybrid Pedagogy (2012),
    http://www.digitalpedagogylab.com/hybridped/hybridity-pt-3-what-does-hybrid-pedagogy-do/
16. Schuwirth, L.W., Van Der Vleuten, C.P.: Programmatic assessment:
    From assessment of learning to assessment for learning. Medical Teacher
    33(6), 478–485 (jun 2011). https://doi.org/10.3109/0142159X.2011.565828,
    http://www.tandfonline.com/doi/full/10.3109/0142159X.2011.565828
17. Sierpinska, A.: On Practical and Theoretical Thinking and Other False Dichotomies
    in Mathematics Education. In: Hoffmann, M., Lenhard, J., Seeger, F. (eds.) Ac-
    tivity and Sign, pp. 117–135. Springer-Verlag, New York (2005)
18. Stommel, J.: Hybridity part 2, what is hybrid pedagogy (2012),
    https://hybridpedagogy.org/hybridity-pt-2-what-is-hybrid-pedagogy/
19. Strong, B., Davis, M., Hawks, V.: Self-Grading In Large General Education
    Classes: A Case Study. College Teaching 52(2), 52–57 (apr 2004).
    https://doi.org/10.3200/CTCH.52.2.52-57,
    http://www.tandfonline.com/doi/abs/10.3200/CTCH.52.2.52-57
20. Warburton, S., Bergin, J., Kohls, C., Köppe, C., Mor, Y.: Dialogical assessment
    patterns for learning from others. In: Proceedings of the 10th Travelling Confer-
    ence on Pattern Languages of Programs - VikingPLoP ’16. pp. 1–14. ACM Press,
    New York, New York, USA (2016). https://doi.org/10.1145/3022636.3022651,
    http://dl.acm.org/citation.cfm?doid=3022636.3022651
21. Warren, B., Ogonowski, M., Pothier, S.: “Everyday” and “Scientific”: Rethinking
    Dichotomies in Modes of Thinking in Science Learning. In: Nemirovsky, R.,
    Rosebery, A.S., Solomon, J., Warren, B. (eds.) Everyday Matters in Science and
    Mathematics, pp. 129–158. Routledge, New York, NY, USA (dec 2004).
    https://doi.org/10.4324/9781410611666-11,
    https://www.taylorfrancis.com/books/e/9781410611666/chapters/10.4324/9781410611666-11