=Paper=
{{Paper
|id=Vol-2141/paper2
|storemode=property
|title=Learning Technology-enabled (Meta)-Cognitive Scaffolding for Enabling Students to Learn Aspects of Written Argumentation
|pdfUrl=https://ceur-ws.org/Vol-2141/paper2.pdf
|volume=Vol-2141
|authors=Noureddine Elouazizi,Gulnur Birol,Gunilla Oberg
}}
==Learning Technology-enabled (Meta)-Cognitive Scaffolding for Enabling Students to Learn Aspects of Written Argumentation==
Noureddine Elouazizi (Skylight and Faculty of Science, UBC, Vancouver BC, Canada), noureddine.elouazizi@science.ubc.ca
Gunilla Oberg (Faculty of Science, UBC, Vancouver BC, Canada), goberg@ires.ubc.ca
Gulnur Birol (Skylight and Faculty of Science, UBC, Vancouver BC, Canada), birol@science.ubc.ca
Abstract

This paper reports on AI-informed and NLP-based work in progress. It describes the technological, educational, and cognitive approaches used to enable science students to engage with automated (AI), personalized (meta)-cognitive scaffolding in order to learn aspects of written scientific argumentation. We briefly report on the features and functionalities of the MindWare technology and on the preliminary results of a small-scale pilot conducted to gauge the impact of technology-mediated scaffolding on students' learning of how to argue (in written form).

CCS Concepts: • Computing methodologies → Cognitive computing

Keywords: Cognitive Computing, Learning Technologies, Argumentation, Natural Language Processing, Science Education

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the first author, Noureddine Elouazizi, reachable at noureddine.elouazizi@science.ubc.ca.
© 2018 Copyright held by the owner/author(s).

1 Introduction

Research in the area of metacognition and scaffolding for learning emphasizes the need to provide adequate, sufficient, and timely external support to enable students to enact their metacognitive processes [1]; [14]; [29]. The past few years have seen a surge in research on technology-mediated assessment of written output by foreign language learners and on learning analytics-informed reflective writing [36]; [15]; [16]; [10]; [3]; [34]. The use of scaffolded automated feedback to support the metacognitive learning of written argumentation is, however, an underexplored domain. This work is a contribution to this domain, with a specific focus on application in the context of undergraduate science education.

Most commonly, scientists learn to develop a written scientific argument by mimicking their supervisors, their peers, and scholarly papers in their discipline. It is increasingly recognized that for students to effectively develop argumentation skills, they must explicitly learn how to argue and reason [22]; [18]. This is because, to develop or critique an argument, students need to explicitly learn how to advance claims, take stances, justify the ideas they hold, and be challenged about the ways they construct their arguments [19]; [46]. Hence, to develop their argumentation skills, students need to gain an understanding of the meta-linguistic and meta-cognitive features of argumentation. Explicit teaching of written argumentation in science might, however, seem an overwhelming challenge, as it requires both content knowledge and knowledge about how to structure a written argument.

Cognisant of these challenges, we developed a learning technology, dubbed MindWare, to provide iterative formative feedback on written argumentation as a support for instructors and students at our university. In this paper, we (a) provide a brief overview of the pedagogical, computational, and cognitive approaches on which the learning technology is based, and (b) briefly report on the preliminary results of a small-scale pilot of the tool.

2 Personalized Learning Environments and Scaffolding

Personalized learning is a pedagogical approach that puts the learner, their progress, and their learning at the heart of the pedagogical experience [8]. This approach allows students to proceed at their own learning pace, and it can be supported by a combination of human and automated processes. The use of automated processes requires technologies that give students control, actionable information, and feedback, and that allow them to take responsibility for their own learning. When used in a course, learning technologies that support personalized learning are expected to monitor individual students' progress at a micro-level and to supply automatic feedback [8].

The pedagogy of learning to argue and arguing to learn [36]; [10] suggests that personalized learning environments need to
cater to both the cognitive and the meta-cognitive aspects of learning to argue. There is reason to believe that such an approach lends itself to pedagogically sound scaffolding [48]. We define scaffolding as providing need-based assistance to students. Effective scaffolding requires that the why, the what, and the how of the scaffolding be related to the expected assessment methods and learning outcomes [2]. In our case, this included explicit scaffolding of the usage of the argumentation voices of hedging, stancing, and logical connectors in written argumentation, as produced in several drafts of essays that students wrote as part of their formative assessment in a First-Year Seminar (SCIE 113) course, in which students learn to construct and deconstruct (scientific) arguments [5].

3 The Metacognition of Argumentation

There are at least three approaches to argumentation: (a) argumentation as a logical product, (b) argumentation as a rhetorical process, and (c) argumentation as an epistemic tool [6]. We adopt the perspectives in (b) and (c). We assume that written language is the direct cognitive by-product that externalizes how students build arguments supported by evidence, and we define argumentation as a complex meta-cognitive act produced by a writer and evaluated by a reader. Assuming that language is core to learning and that thought and language are inseparable [38], examining students' argumentation offers opportunities for gaining insights into how students engage in scientific reasoning.

Drawing on the reasoning above, we assume that the argumentation voice exhibited in student essays is a direct window into students' reasoning. This reasoning is externalized, in written form, through the way students formulate a claim (premise/thesis statement), how they elaborate on that premise, how they hedge, how they take a stance, and the logical connections they adopt in their essays. We further assume that, in the process of taking an argument from an initial draft to the final product submitted for summative assessment, students will have engaged with many meta-cognitive aspects of written argumentation.

To enable students to engage in the cognitive and meta-cognitive aspects of learning to argue (in written form), a set of pedagogical requirements must be met by the scaffolding processes enabled through learning technology. These requirements, which we derive from the literature on metacognition for learning [12]; [47]; [49]; [7], include: (i) learning technology functionalities that help students monitor their own thinking processes, (ii) functionalities that help students internalize self-monitoring techniques, and (iii) functionalities that help students develop higher-order cognitive processing techniques (through asking higher-order questions) [12]; [47].
4 Technology-enabled Scaffolding of Written Argumentation Voice

The past decades have witnessed an increase in studies that investigate students' argumentation skills in educational contexts and how these might be enhanced [38]; [27]; [41]; [28]; [42]. As Scheuer et al. [33] observe, (automated) support for learning argumentation is missing from most formal courses. To address this gap, many technology and learning scientists have explored different technology designs to support aspects of representing argumentation, to simulate and diagnose reasoning [42]; [40]; [44]; [10]; [43], and to support conversational argumentation [35]; [39]. This has led to the development of a number of technologies designed to improve learning through diagramming argumentation [19]; [43] and to enable scaffolding and argumentative communication through visualization [44]. In parallel with this work on how to (re)present an argument, the last two decades have also witnessed the emergence of advanced techniques for mining different aspects of argumentation from text. These include the automatic classification of argument components [34]; [10]; [35], the identification of argumentation structures [45], and the separation of argumentative from non-argumentative text units [14]; [42].

We build on these general approaches to mining and representing aspects of argumentation, and on specific insights into how computational argumentation methods can be used to analyze essays for pedagogical purposes. In this respect, the general computational argumentation method that we have adopted relates to that of Persing and Ng [27], Song et al. [34], Walton et al. [42], and Klebanov et al. [19]. We share with these scholars the goal of extracting argument structures from essays by recognizing (structural) argument components and jointly modeling their types and the relations between them.
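The paper does not disclose MindWare's internals, but the component-classification step referenced above can be illustrated with a minimal supervised pipeline. The sketch below assumes scikit-learn, a toy inline training set, and the labels "claim"/"premise"; all of these are illustrative choices on our part, not the authors' actual implementation.

```python
# Minimal sketch of argument-component classification (claim vs. premise),
# in the spirit of the mining approaches cited above. The tiny training set
# and the label names are illustrative assumptions, not MindWare's internals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Genetically modified crops should be adopted more widely.",   # claim
    "Field trials report yield increases of up to 20 percent.",    # premise
    "Therefore, the benefits appear to outweigh the risks.",       # claim
    "Several long-term studies found no adverse health effects.",  # premise
]
labels = ["claim", "premise", "claim", "premise"]

# Word uni/bigram TF-IDF features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(sentences, labels)

# Label the sentences of a new essay draft.
draft = [
    "Pesticide use dropped in the surveyed farms.",
    "Hence, regulators should revisit the current restrictions.",
]
print(model.predict(draft))
```

A production system would train on an annotated essay corpus and, as in the joint models cited above, would also predict the relations (e.g., support) that hold between the recognized components.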
MindWare (our software), currently in beta, has two clusters of functionalities: one for students and one for instructors. The instructors in our educational context are scientists who have no training in the language sciences or in argumentation analysis per se. MindWare is intended to complement the feedback provided by the instructors, so that they can focus their feedback on content, such as the quality of the evidence provided in support of the argument. The software is designed to provide feedback on students' written argumentation voice, focusing specifically on the usage of hedging, stancing, logical connectors, and coherence. Students submit a number of drafts (the number is set by the instructor), and their performance is visualized in a set of color-coded gauges, heat maps, and graphs that give students feedback on the aspects of their argumentation that require improvement (see Figure 1).

[Figure 1: Dashboard of feedback for students]

The dashboard also displays feedback on students' performance on aspects of their argumentation across the different drafts of their essays (see Figure 1). Instructors can use the software to view the submissions and the performance of a particular student and/or a group of students, and they can see which aspects students commonly struggle with in mastering the components of the argumentation voice, so that they can design pedagogical interventions accordingly. Instructors do this through a dashboard that provides them with an overview of the different aspects of argumentation in students' essays. For example, in Figure 2, the heat map provides an overview of the areas of argumentation that the class is struggling with: areas colored in yellow and red indicate aspects of written argumentation that some of the students in that course section are struggling with, and which therefore require the pedagogical attention of the instructor.

[Figure 2: (Partial view of a) dashboard for the instructor]

In terms of the computational model, MindWare is equipped with Natural Language Processing and Machine Learning modules that analyze and weigh the usage of the components of an argumentation voice, viz., the balanced use of stancing, hedging, logical connections, and coherence. For example, MindWare can identify and evaluate the degree of stancing in an essay [10], that is, whether the writer is arguing for a specific stance. In contrast to describing, stancing is used to express one's position: when writers take a stance, they not only express factual information but also indicate their commitment to what they said or wrote. The presence (or the lack thereof) of the components of the argumentation voices of stancing, hedging, and logical connections can shape the reader's opinion of the writer and of their argument in such a way that it succeeds (or fails) to convey adequate epistemic vigilance on the part of the writer.
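As a concrete (and deliberately simplified) illustration of how the usage of these voice components might be weighed, the sketch below computes lexicon-based usage rates for one essay. The hedge, stance, and connector word lists are small assumptions made for the example; the paper does not specify MindWare's actual lexicons or trained models.

```python
# Illustrative lexicon-based scoring of argumentation-voice features;
# the word lists are assumptions for this example, not MindWare's resources.
import re

HEDGES = {"may", "might", "could", "suggests", "appears", "possibly", "likely"}
STANCE = {"argue", "believe", "contend", "clearly", "must", "should"}
CONNECTORS = {"therefore", "because", "however", "hence", "thus", "moreover"}

def voice_profile(essay: str) -> dict:
    """Return per-100-token usage rates of hedging, stancing and connectors."""
    tokens = re.findall(r"[a-z']+", essay.lower())
    n = max(len(tokens), 1)
    count = lambda lexicon: sum(t in lexicon for t in tokens)
    return {
        "hedging_per_100": 100 * count(HEDGES) / n,
        "stancing_per_100": 100 * count(STANCE) / n,
        "connectors_per_100": 100 * count(CONNECTORS) / n,
    }

draft = ("We argue that the data clearly supports the claim. However, "
         "the effect might be smaller in larger samples; therefore, "
         "further replication is likely needed.")
print(voice_profile(draft))
```

Rates of this kind could then drive the color-coded gauges described above; the production system presumably relies on its trained NLP/ML modules rather than on fixed word lists.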
5 Gauging the Impact

In this study, we piloted MindWare with the aim of supporting the metacognitive processes that underlie the learning of aspects of written argumentation in the context of a first-year science course. Some of our scaffolding strategies were planned in advance and focused on enabling and supporting the learning of aspects of written argumentation that are crucial for establishing an argumentation voice in an essay, as they are inherent in the exercise of epistemic vigilance within a written text [6]. This includes the (balanced) use of hedging, stancing, logical connections, and coherence as indispensable components of an argumentation voice. The AI-based modules in MindWare weigh the usage of these features in an essay and provide feedback (in visual and numerical form) to the learner. Other parts of the scaffolding in MindWare are provided dynamically, based on the responses of the student, and such scaffolding is supported by automatic feedback. An overview of the metacognitive scaffolding strategies we employed in MindWare is provided in Table 1.

Table 1: Metacognitive scaffolding strategies in MindWare
Metacognitive scaffolding: monitoring, evaluating, and revising the use of hedging, stancing, and logical connections.
MindWare interface: learning analytics dashboards, including information about differences across drafts of an essay, feedback on specific aspects of the argumentation voice, and highlighting of relevant text passages within the drafts of the essays.
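To make the "differences across drafts of an essay" feedback in Table 1 concrete, here is a hypothetical sketch that compares the voice profiles of successive drafts. The word lists and the delta computation are a plausible reconstruction only, not MindWare's implementation.

```python
# Hypothetical sketch: draft-over-draft differences in argumentation-voice
# usage, as might feed the dashboard in Table 1. Word lists are illustrative.
import re

LEXICONS = {
    "hedging": {"may", "might", "suggests", "appears", "possibly", "likely"},
    "stancing": {"argue", "contend", "clearly", "must", "should"},
    "connectors": {"therefore", "because", "however", "hence", "thus"},
}

def voice_rates(text: str) -> dict:
    """Per-100-token usage rate of each voice feature in one draft."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {name: 100 * sum(t in lex for t in tokens) / n
            for name, lex in LEXICONS.items()}

def draft_deltas(drafts: list[str]) -> list[dict]:
    """Change in each rate between successive drafts (positive = more usage)."""
    rates = [voice_rates(d) for d in drafts]
    return [{k: round(curr[k] - prev[k], 2) for k in curr}
            for prev, curr in zip(rates, rates[1:])]

drafts = [
    "The results prove the hypothesis. Everyone agrees this is true.",
    "The results suggest the hypothesis may hold; however, further "
    "evidence is needed because the sample was small.",
]
for i, delta in enumerate(draft_deltas(drafts), start=1):
    print(f"draft {i} -> draft {i+1}: {delta}")
```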
To gauge the impact of MindWare, and in particular its ability to enable metacognitive scaffolding and to support the use of the argumentation voice, we conducted a small-scale pilot in a first-year science course. The pilot was run in two sections of the same course. Each section had 25 volunteering students, and students had the option to withdraw from the study at any time. Data collection was carried out in three stages, and the data of students who did not complete all three stages were discarded.

In stage one, students responded to a pre-task survey gauging their familiarity with the investigated concepts (hedging, stancing, and logical connections) and their confidence in using such components. Only after completing stage one were students granted access to MindWare. In stage two, they were invited by the course instructors to submit up to five drafts of their written essays and to explore the software, including receiving feedback, before submitting the final version to the instructor for final assessment and grading. In this process, students were also granted access to an artificial agent with which they could interact to ask questions about different aspects of written argumentation and to get automated feedback. In this stage of the pilot, 26 out of 50 students worked consistently in the MindWare environment. This stage lasted for two weeks. After the students submitted the final version of their essay to the instructors, in stage three, they were asked to respond to a set of survey questions reflecting on their learner experience, and specifically on their perceptions of their own performance regarding the usage of the components of the argumentation voices in their written scientific essays. Of the entire cohort of 56 students, 54 participated in stage 1, 26 participated in stage 2, and 19 responded to the post-task survey.

On a scale of 1 to 10, students were asked to rate their familiarity with the indispensable components of the argumentation voices of hedging, stancing, and logical connections in an essay. The left part of Figure 3 provides an overview of the pre-task survey responses: only 15% of the students indicated that they were familiar to very familiar with the components of the argumentation voice of hedging, stancing, and logical connections. After two weeks of scaffolding through the use of MindWare, 51% of the students reported that they were very familiar with how to use the components of the argumentation voice in a written essay.
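The reported percentages presumably come from bucketing the 1-to-10 ratings into bands. The sketch below shows one way such bucketing could be computed; the cutoff value and the toy response lists are assumptions for illustration, not the study's data or its actual analysis.

```python
# Hypothetical aggregation of 1-10 familiarity ratings into percentage bands;
# the >=7 "familiar to very familiar" cutoff is an assumed threshold.
def percent_at_least(ratings: list[int], cutoff: int = 7) -> float:
    """Share of respondents rating their familiarity at or above `cutoff`."""
    return 100 * sum(r >= cutoff for r in ratings) / len(ratings)

pre_task = [3, 5, 2, 7, 4, 6, 8, 3, 5, 4]   # toy pre-task responses
post_task = [8, 9, 7, 6, 9, 8, 7, 9, 5, 8]  # toy post-task responses
print(f"pre:  {percent_at_least(pre_task):.0f}% familiar or above")
print(f"post: {percent_at_least(post_task):.0f}% familiar or above")
```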
[Figure 3: Familiarity of the students with the components of the argumentation voice (pre-task and post-task responses); panels: "Before scaffolding in MindWare" and "After scaffolding in MindWare"]

Likewise, we observed that the confidence of the students in using the components of the argumentation voices in their essays increased. In the pre-task survey, 17.33% of the students reported that they were confident to very confident in using the components of the argumentation voice in their essays. In the post-task survey, after two weeks of technology-enabled scaffolding, 53% of the students reported that they had become very confident in using the components of the argumentation voice in their written essays.

[Figure 4: Confidence of the students in using the components of the argumentation voice (pre-task and post-task responses); panels: "Before scaffolding in MindWare" and "After scaffolding in MindWare"]

Overall, it seems that students' familiarity with the components of the argumentation voice in their written essays, and their confidence in using such components, increased after they used the meta-cognitive scaffolding strategies enabled through MindWare.

6 Conclusion

As indicative as this early-stage data overview may seem, it is neither conclusive nor comprehensive. It is necessary to carry out an extensive analysis of how the specific components of the argumentation voice evolved or devolved across the drafts of the essays that the students submitted to MindWare. Moreover, we need to analyze the significance, if any, of the changes in the grades of the students within the experimental group, and to compare the results to those of a control group: a course section that did not participate in the pilot study in which MindWare was used to scaffold aspects of written argumentation. In future work, we plan to carry out an extensive analysis to address and report on these pending aspects of our research into the interplay between the use of AI- and NLP-informed learning technology, (meta)cognitive scaffolding, and the learning of written scientific argumentation.

Acknowledgments

We gratefully acknowledge the financial support for this project, provided by (a) UBC's TLEF innovation grant (project grant: 22G36907) and (b) the Science Centre for Learning and Teaching (Skylight) at UBC's Faculty of Science. We are also grateful to the SCIE 113 students and instructors for participating in this research.

References

[1] Azevedo, R., Guthrie, J. T., & Seibert, D. 2004. The role of self-regulated learning in fostering students' conceptual understanding of complex systems with hypermedia. Journal of Educational Computing Research, 30, 87-111.
[2] Azevedo, R., & Hadwin, A. F. 2005. Scaffolding self-regulated learning and metacognition—implications for the design of computer-based scaffolds. Instructional Science, 33(5–6), 367–379.
[3] Buckingham Shum, S., et al. 2017. Towards reflective writing analytics: Rationale, methodology, and preliminary results. Journal of Learning Analytics, 4(1), 58–84.
[4] Burstein, J. 2003. The E-rater® scoring engine: Automated essay scoring with natural language processing. Lawrence Erlbaum Associates Publishers.
[5] Birol, Gülnur, et al. 2013. Research and Teaching: Impact of a First-Year Seminar in Science on Student Writing and Argumentation. Journal of College Science Teaching, 43(1).
[6] Bermejo-Luque, L. 2011. Giving Reasons: A Linguistic-Pragmatic Approach to Argumentation Theory. Argumentation Library, vol. 20. Dordrecht: Springer.
[7] Brown, A. L. 1987. Metacognition, executive control, self-regulation, and other more mysterious mechanisms. Hillsdale, NJ: Lawrence Erlbaum.
[8] Conati, C. and Maclaren, H. 2009. Empirically Building and Evaluating a Probabilistic Model of User Affect. User Modeling and User-Adapted Interaction, 19, 267-303.
[9] de Groot, R., et al. 2007. Computer supported moderation of e-discussions: The ARGUNAUT approach. In C. Chinn, G. Erkens & S. Puntambekar (Eds.), Mice, minds, and society—The Computer Supported Collaborative Learning (CSCL) Conference 2007 (pp. 165–167). International Society of the Learning Sciences.
[10] Elouazizi, Noureddine, et al. 2017. Automated analysis of aspects of written argumentation. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK '17) (pp. 606-607). ACM. DOI: http://dx.doi.org/10.1145/3027385.3029484.
[11] Foltz, P. W., Gilliam, S., and Kendall, S. 2000. Supporting content-based feedback in online writing evaluation with LSA. Interactive Learning Environments, 8(2), 111–129.
[12] Flavell, J. H. 1979. Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10), 906–911.
[13] Florou, Eirini, et al. 2013. Argument extraction for supporting public policy formulation. In Proceedings of the 7th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (pp. 49–54), Sofia, Bulgaria. Association for Computational Linguistics.
[14] Ge, X., & Land, S. M. 2004. A conceptual framework for scaffolding ill-structured problem-solving processes using question prompts and peer interactions. Educational Technology Research and Development, 52(2), 5-22.
[15] Gibson, A., & Kitto, K. 2015. Analysing reflective text for learning analytics: An approach using anomaly recontextualisation. In Proceedings of the 5th International Conference on Learning Analytics and Knowledge (LAK '15), 16–20 March 2015, Poughkeepsie, NY, USA (pp. 275–279). New York: ACM.
[16] Gibson, A., Kitto, K., & Bruza, P. 2016. Towards the discovery of learner metacognition from reflective writing. Journal of Learning Analytics, 3(2), 22–36.
[17] Herrenkohl, L. and Guerra, M. 1998. Participant structures, scientific discourse, and student engagement in fourth grade. Cognition and Instruction, 16(4), 431-473.
[18] Jermann, Patrick and Pierre Dillenbourg. 2003. Elaborating new arguments through a CSCL scenario. In J. Andriessen, M. Baker & D. Suthers (Eds.), Arguing to Learn: Confronting Cognitions in Computer Supported Collaborative Learning Environments. CSCL Series, vol. 1. Amsterdam: Kluwer.
[19] Klebanov, B. B., et al. 2016. Argumentation: Content, structure, and relationship with essay quality. In Proceedings of the Third Workshop on Argument Mining (ArgMining 2016) (pp. 70–75). Association for Computational Linguistics.
[20] Kakkonen, T., Myller, N., and Sutinen, E. 2006. Applying Part-Of-Speech Enhanced LSA to Automatic Essay Grading. In Proceedings of the 4th IEEE International Conference on Information Technology: Research and Education (ITRE 2006).
[21] Landauer, T. K., et al. 1997. How well can passage meaning be derived without using word order? A comparison of Latent Semantic Analysis and humans. In M. G. Shafto & P. Langley (Eds.), Proceedings of the 19th Annual Meeting of the Cognitive Science Society (pp. 412-417). Mahwah, NJ: Erlbaum.
[22] Linn, M. C., Bell, P., & Hsi, S. 1998. Using the Internet to enhance student understanding of science: The knowledge integration environment. Interactive Learning Environments, 6(1–2), 4–38.
[23] Moens, M.-F., et al. 2007. Automatic detection of arguments in legal texts. In ICAIL '07: Proceedings of the 11th International Conference on Artificial Intelligence and Law (pp. 225–230), New York, NY, USA. ACM Press.
[24] Mochales, R. and Moens, M.-F. 2008. Study on the Structure of Argumentation in Case Law. In Legal Knowledge and Information Systems, JURIX 2008. IOS Press.
[25] McAlister, S., Ravenscroft, A., & Scanlon, E. 2004. Combining interaction and context design to support collaborative argumentation using a tool for synchronous CMC. Journal of Computer Assisted Learning: Special Issue: Developing Dialogue for Learning, 20(3), 194–204.
[26] Mayer, R. E. 1996. Learning strategies for making sense out of expository text: The SOI model for guiding three cognitive processes in knowledge construction. Educational Psychology Review, 8, 357–371.
[27] Persing, I. and Ng, V. 2015. Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (pp. 543–552). Association for Computational Linguistics.
[28] Ranney, M., and Schank, P. 1998. Toward an integration of the social and the scientific: Observing, modeling, and promoting the explanatory coherence of reasoning. In S. Read & L. Miller (Eds.), Connectionist Models of Social Reasoning and Behavior (pp. 245-274). Mahwah, NJ: Lawrence Erlbaum.
[29] Roll, I., Holmes, N. G., Day, J., & Bonn, D. 2012. Evaluating metacognitive scaffolding in guided invention activities. Instructional Science, 40, 691-710.
[30] Suthers, D. D. 2001. Architectures for computer supported collaborative learning. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT 2001) (pp. 25–28), Madison.
[31] Suthers, D. D., et al. 2008. Beyond threaded discussion: Representational guidance in asynchronous collaborative learning environments. Computers & Education, 50(4), 1103–1127.
[32] Schwarz, B. B., and Glassner, A. 2007. The role of floor control and of ontology in argumentative activities with discussion-based tools. International Journal of Computer-Supported Collaborative Learning (ijCSCL), 2(4), 449–478.
[33] Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. 2010. Computer-Supported Argumentation: A Review of the State-of-the-Art. International Journal of Computer-Supported Collaborative Learning, 5(1), 43-102.
[34] Song, Y., et al. 2014. Applying argumentation schemes for essay scoring. In Proceedings of the First Workshop on Argumentation Mining (pp. 69–78). Association for Computational Linguistics.
[35] Sumsion, J., & Fleet, A. 1996. Reflection: Can we assess it? Should we assess it? Assessment & Evaluation in Higher Education, 21(2), 121–130.
[36] Teufel, S. and Moens, M. 1999. Discourse-level argumentation in scientific articles: Human and automatic annotation. In Towards Standards and Tools for Discourse Tagging, ACL 1999 Workshop.
[37] Ullmann, T. D., Wild, F., & Scott, P. 2012. Comparing automatically detected reflective texts with human judgements. In Proceedings of the 2nd Workshop on Awareness and Reflection in Technology-Enhanced Learning (AR-TEL '12), Saarbrucken, Germany (pp. 101–116).
[38] van Gelder, T. 2002. Argument mapping with Reason!Able. The American Philosophical Association Newsletter on Philosophy and Computers, 2(1), 85–90.
[39] van Gelder, T. 2003. Enhancing deliberation through computer-supported argument visualization. In P. A. Kirschner, S. J. Buckingham Shum, & C. S. Carr (Eds.), Visualizing Argumentation: Software Tools for Collaborative and Educational Sense-Making. London: Springer.
[40] van den Braak, S., & Vreeswijk, G. 2006. AVER: Argument visualization for evidential reasoning. In T. M. van Engers (Ed.), Proceedings of the 19th Conference on Legal Knowledge and Information Systems (JURIX 2006) (pp. 151–156). Amsterdam: IOS.
[41] Verheij, B. 2003. Artificial argument assistants for defeasible argumentation. Artificial Intelligence, 150(1–2), 291–324.
[42] Walton, D., Reed, C., and Macagno, F. 2008. Argumentation Schemes. New York, NY: Cambridge University Press.
[43] Woolf, B. P., et al. 2005. Critical thinking environments for science education. In C. K. Looi, G. McCalla, B. Bredeweg, & J. Breuker (Eds.), Proceedings of the 12th International Conference on Artificial Intelligence and Education (AI-ED 2005) (pp. 702–709). Amsterdam: IOS.
[44] Wiemer-Hastings, P., and Zipitria, I. 2001. Rules for Syntax, Vectors for Semantics. In Proceedings of the 23rd Annual Conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum.
[45] Wyner, A., et al. 2010. Approaches to text mining arguments from legal cases. In Semantic Processing of Legal Texts (pp. 60–79). Springer-Verlag, Berlin, Heidelberg.
[46] Wiebe, J., Wilson, T., Bruce, R., Bell, M., and Martin, M. 2004. Learning Subjective Language. Computational Linguistics, 30(3).
[47] Winne, P. H. 2011. A cognitive and metacognitive analysis of self-regulated learning. In B. J. Zimmerman and D. H. Schunk (Eds.), Handbook of Self-Regulation of Learning and Performance (pp. 15-32). New York: Routledge.
[48] Wood, D., Bruner, J., & Ross, G. 1976. The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17, 89–100.
[49] Zimmerman, B. J., & Schunk, D. H. (Eds.). 2011. Handbook of Self-Regulation of Learning and Performance. New York: Routledge.