On Measuring Learning in Search: A Position Paper

Luanne Freund, Samuel Dodson, Rick Kopak
iSchool, University of British Columbia, Vancouver, BC, Canada
luanne.freund@ubc.ca, samuel.dodson@alumni.ubc.ca, r.kopak@ubc.ca

ABSTRACT
This position paper discusses approaches used to evaluate learning that results from searching and interacting with online content.

Keywords
comprehension; evaluation; interactive search and retrieval; learning; measurement; semantic navigation

1. INTRODUCTION
Research on search systems is shifting from an emphasis on information seeking and retrieval to one of information interaction and use. This is an outgrowth of changes in the information landscape, where full-text and multimedia information objects in digital format are now readily available in systems that facilitate browsing and direct interaction with these objects. While traditional assessment measures for information seeking and retrieval have focused on effectiveness and efficiency in retrieving information objects, these are no longer sufficient in more immersive and interactive environments.

Our research group has characterized a form of information interaction that takes place in online search environments as semantic navigation [6], focusing on the multi-level meaning-making and learning that takes place while moving through hyperlinked digital environments. More recently, we have focused explicitly on the inter-connected processes of reading, comprehension, engagement, and learning in the course of digital information interaction [2, 3]. In this position paper, we discuss some of the approaches that have been used to evaluate learning as a key outcome of information interaction and search.

2. APPROACHES TO EVALUATION
Past research that evaluates learning in the context of searching is relatively rare [4, 11], but more recently, increased interest has been shown through a series of "Searching as Learning" workshops¹ and associated publications [8, 9]. However, there is a wide body of research in related areas, including text comprehension and hypertext. Taken together, this prior work offers a range of approaches.

¹ The first "Searching as Learning" workshop was held at the IIiX 2014 Conference (http://www.diigubc.ca/IIIXSAL).

2.1 Models and Theories
Several different models of comprehension and learning are commonly referenced in work on searching as learning, with implications for measurement.

The Construction-Integration (C-I) model [5] has been the basis for our own work in this area. It focuses on the cognitive process of comprehension during interaction with content. This is represented as a two-step process. First, the reader creates nodes for all propositions in the text. These nodes form the textbase, within which there is a microstructure that deals with comprehension at the sentence and paragraph level, and a macrostructure consisting of the global, overall meaning, or gist, of the text [5]. This distinction is important when evaluating comprehension, as different tests of comprehension are sensitive to outcomes at both the micro- and macro-levels. Our research has focused on measurement at the macro-structural level, as we are most interested in the reader's understanding of the overall meaning of the text. We have found variation in the ability of standard comprehension tests to measure at both the macro- and micro-levels.

Kuhlthau's Information Search Process model [7] has been highly influential in information science. It is informed by a constructivist approach to learning and is insightful in that it portrays learning as a process characterized by distinct phases during the course of interacting with information, with associated changes in goals, activities, and emotional states. Vakkari's empirical work extended the model into the search domain by demonstrating that searchers' queries and relevance assessments reflect changes in their knowledge state as they search [10, 9].

Bloom's Taxonomy of Educational Objectives [1] has served as a framework for a number of recent search studies [4, 11]. The Taxonomy identifies a set of progressively complex learning objectives that can be used to design or assess learning experiences. It offers a means of assessing the depth of learning that occurs through search, although it can be challenging to differentiate between categories and to map evidence of learning to them.

2.2 Methods
Methods of assessing searching and learning are interdisciplinary and wide-ranging, and a lengthy review would be required to provide an overview of them (e.g., [8, 9]). In this position paper, we simply articulate the broader dimensions of methods that have emerged in our own work.

2.2.1 Pre- and Post-task vs. Process
There are two common approaches to assessing learning outcomes of search. The first approach tends to rely on a post-task test or written summary, and may include a pre-task assessment of prior knowledge. We have relied primarily upon this approach in our work to date, comparing learning outcomes resulting from different interaction environments. However, results can be difficult to interpret in the absence of interaction data. Process-oriented approaches, on the other hand, capture patterns of behavior and thus can reveal the mechanisms by which learning occurs, such as spending more time in certain sections of documents, or switching more frequently between documents [3].
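To make the contrast concrete, the following sketch is a minimal illustration and not an instrument from our studies: it computes a simple pre/post knowledge gain per participant (the outcome approach) and derives two process measures, per-document dwell time and between-document switching, from a hypothetical timestamped log of document views. The participant IDs, document IDs, and log format are assumptions made for illustration only.

```python
# Illustrative sketch only: contrasting outcome-based and process-oriented
# evidence of learning in search. All data structures are hypothetical.
from collections import defaultdict
from typing import Dict, List, Tuple


def knowledge_gain(pre: Dict[str, float], post: Dict[str, float]) -> Dict[str, float]:
    """Outcome approach: difference between post- and pre-task test scores per participant."""
    return {pid: post[pid] - pre[pid] for pid in pre if pid in post}


def process_measures(views: List[Tuple[str, float]]) -> Tuple[Dict[str, float], int]:
    """Process approach: per-document dwell time and number of between-document
    switches, from a time-ordered log of (doc_id, timestamp) view events."""
    dwell: Dict[str, float] = defaultdict(float)
    switches = 0
    for (doc, t), (next_doc, next_t) in zip(views, views[1:]):
        dwell[doc] += next_t - t      # time spent before the next view event
        if next_doc != doc:
            switches += 1             # searcher moved to a different document
    return dict(dwell), switches


if __name__ == "__main__":
    pre = {"p01": 3.0, "p02": 5.0}
    post = {"p01": 7.0, "p02": 6.0}
    views = [("d1", 0.0), ("d2", 40.0), ("d1", 95.0), ("d1", 120.0), ("d3", 180.0)]
    print(knowledge_gain(pre, post))   # {'p01': 4.0, 'p02': 1.0}
    print(process_measures(views))     # ({'d1': 125.0, 'd2': 55.0}, 3)
```

The two kinds of evidence are complementary: as noted above, post-task scores alone are difficult to interpret without the accompanying interaction data.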
2.2.2 Duration
Typical online search interactions may take only a few seconds or minutes and are not likely to involve significant learning on the part of the searcher. In fact, one of the arguments for considering learning as an important search outcome is to acknowledge the value of "slow search" and of search tasks that carry over through multiple search sessions, in contrast to the efficiency-based models that predominate in IR research. Therefore, methods for studying learning in search will require search tasks that prompt lengthier searches with high degrees of interaction, multiple sessions, or longitudinal studies. This will allow learning to be assessed in real time, as the search process unfolds, and as an immediate and/or sustained outcome of searching.

2.2.3 Customized vs. Generic
A major challenge in assessing learning is the dependency among the specific content, the prior knowledge of the searcher, and the learning that occurs. Most approaches to assessing learning rely upon tests based on a small number of known content items, such as sets of articles or webpages. The custom development and validation of these instruments is labour intensive, and the method does not scale up for use in search studies using large document collections. Alternative, more generic methods require participants to produce open-ended summaries or reports and assess those reports for evidence of learning.

2.3 Measures
The simplest and most common measures of learning are self-reported knowledge gain and tests of factual knowledge using multiple-choice or true-and-false responses. However, such measures do not account for the complexity of learning as a multi-stage and multi-level process. We have found differences between measures targeting the micro and macro levels of comprehension from the C-I model. Drawing upon insights from Kuhlthau's model and Bloom's Taxonomy, we expect that it will be possible to develop even more sophisticated measures of learning.
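As a minimal sketch of how the micro/macro distinction can be kept visible in measurement, the example below scores a hypothetical comprehension test whose items are tagged by level and reports a separate sub-score for each level rather than a single aggregate. The items, tags, and scoring scheme are invented for illustration and are not drawn from our instruments.

```python
# Illustrative sketch only: reporting comprehension sub-scores by level
# (micro vs. macro, in the sense of the C-I model) rather than one aggregate.
from collections import defaultdict
from typing import Dict, List, Tuple

# Hypothetical test items: (item_id, level, correct_answer).
ITEMS: List[Tuple[str, str, str]] = [
    ("q1", "micro", "b"),  # fact stated within a single sentence
    ("q2", "micro", "a"),
    ("q3", "macro", "c"),  # requires the gist of the whole text
    ("q4", "macro", "d"),
]


def level_scores(responses: Dict[str, str]) -> Dict[str, float]:
    """Proportion correct per level, so micro- and macro-level comprehension
    can be compared rather than collapsed into a single score."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for item_id, level, answer in ITEMS:
        total[level] += 1
        if responses.get(item_id) == answer:
            correct[level] += 1
    return {level: correct[level] / total[level] for level in total}


if __name__ == "__main__":
    print(level_scores({"q1": "b", "q2": "a", "q3": "c", "q4": "a"}))
    # {'micro': 1.0, 'macro': 0.5}
```

Reporting by level preserves the distinction between micro- and macro-structural comprehension in the results instead of averaging it away.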
3. NEXT STEPS
Drawing upon the range of models and methods outlined here, there is potential to develop and build consensus around a standardized approach to the assessment of learning in search, much as the interactive information retrieval community developed a standard approach to the design of experimental search studies a decade ago. We look forward to engaging with SAL workshop participants to move us closer to this goal.

4. ACKNOWLEDGEMENTS
Research funding from the University of British Columbia Hampton Fund is gratefully acknowledged, as are the contributions of our colleague Heather O'Brien.

5. REFERENCES
[1] B. S. Bloom, M. D. Engelhart, E. J. Furst, W. H. Hill, and D. R. Krathwohl. Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. David McKay, New York, NY, 1956.
[2] S. Dodson. Effects of field dependence-independence and passive highlights on comprehension. Master's thesis, University of British Columbia, Vancouver, Canada, 2016.
[3] L. Freund, R. Kopak, and H. O'Brien. The effects of textual environment on reading comprehension. Journal of Information Science, 42(1):79–93, 2016.
[4] B. J. Jansen, D. Booth, and B. Smith. Using the taxonomy of cognitive learning to model online searching. Information Processing & Management, 45(6):643–663, 2009.
[5] W. Kintsch. Comprehension: A paradigm for cognition. Cambridge University Press, Cambridge, England, 1998.
[6] R. Kopak, L. Freund, and H. L. O'Brien. Supporting semantic navigation. In Proceedings of the Third Symposium on Information Interaction in Context, pages 359–364, 2010.
[7] C. Kuhlthau. Seeking meaning: A process approach to library and information services. Libraries Unlimited, Westport, CT, 2004.
[8] S. Y. Rieh, K. Collins-Thompson, P. Hansen, and H.-J. Lee. Towards searching as a learning process: A review of current perspectives and future directions. Journal of Information Science, 42(1):19–34, 2016.
[9] P. Vakkari. Searching as learning: A systematization based on literature. Journal of Information Science, 42(1):7–18, 2016.
[10] P. Vakkari, M. Pennanen, and S. Serola. Changes of search terms and tactics while writing a research proposal: A longitudinal case study. Information Processing & Management, 39(3):445–463, 2003.
[11] M. J. Wilson and M. L. Wilson. A comparison of techniques for measuring sensemaking and learning within participant-generated summaries. Journal of the American Society for Information Science and Technology, 64(2):291–306, 2013.