         Applications of Large Displays: Advancing User
          Support in Large Scale Ontology Alignment

                                       Valentina Ivanova

                               valentina.ivanova@liu.se
                            Linköping University, Linköping, Sweden



         Abstract. Producing alignments of the highest quality requires ‘humans in the
         loop’; however, user involvement is currently one of the challenges for the
         ontology alignment community. Ontology alignment is a cognitively intensive task
         and could be efficiently supported by user interfaces encompassing well-designed
         visualizations and interaction techniques. This work investigates the application
         of large, high-resolution displays to improve users’ cognitive support and identi-
         fies several promising directions for their application—improving ontologies’ and
         alignments’ navigation, supporting users’ thinking process and collaboration.


1      Problem Statement
The growth of the ontology alignment area in the past ten years led to the development
of many ontology alignment tools and a platform for their annual evaluation—OAEI1 .
In most cases ontology alignment systems apply fully automated approaches where an
alignment, i.e., a set of mappings between two ontologies, is generated without any hu-
man intervention. Such approaches are only the first step in alignment generation [14],
since advancing the algorithms has not led to comparable improvements in the align-
ments’ quality [18]. According to [6], involving users will lead to a greater improvement
in the alignments’ quality than developing more accurate algorithms. Simulating user
input with an (all-knowing) oracle led to improved alignment quality in comparison
to fully automated approaches [9, 12, 22]. Explanation of matching results to users,
fostering user involvement in the process, and social and collaborative matching are,
however, still challenges for the community [31]. Another challenge is the evaluation
of the quality and the effectiveness of user involvement [14, 15].
    Graphical interfaces are essential for supporting users during ontology alignment;
however, many systems do not provide one [31]. Furthermore, only about one third of
the systems participating in the OAEI have a graphical interface. Ontology alignment
involves working simultaneously with at least two ontologies and often a large number of calculated
mappings. This leads to issues related not only to the meaningful representation and
navigation but also to the cognitive load during the alignment process. The demand for
user interfaces is even more pressing given the trend towards growing size and com-
plexity of the ontologies and the alignments.
    Recently, with the development of technology and the associated cost reduction,
large, high-resolution displays have become available at affordable prices. It has been pointed
 1
     http://oaei.ontologymatching.org/—Ontology Alignment Evaluation Initiative (OAEI)
out that ‘when a display exceeds a certain size, it becomes qualitatively different’. A
number of studies have shown improved performance and reduced cognitive load in
an everyday office environment due to more peripheral awareness, glancing instead
of window switching to obtain additional information, flexibility in the organization
of the space, etc. Environments where large displays are present are well-suited for
activities involving several people where they can simultaneously work and discuss.
    This work aims to improve the alignments’ quality by addressing the challenge(s)
of (collaborative) user involvement. It will design and develop user interfaces and cor-
responding visualization and interaction techniques by taking advantage of the latest
technology developments. More specifically, it will investigate how to employ the extra
space provided by large, high-resolution displays in order to improve navigation in the
ontologies and alignments and provide a means to support the users’ thinking process.

2      Relevancy
As indicated by the initial 3Vs of Big Data—volume, velocity and variety—the amount
of data today is growing with unprecedented speed. Broadly speaking, ontology align-
ment addresses the problems of data and knowledge sharing and reuse by providing
techniques for integrating different data sources; it provides a means for interoperability
between semantically-enabled applications. Now, in the Big Data era, it provides tech-
niques to turn the data from distributed, heterogeneous datasets into valuable knowledge
for their owners. The user interfaces that will result from this work will support the
‘humans in the loop’ during the knowledge-intensive alignment process.
     Moreover, the potential benefits from improving the alignments’ quality will extend
to all domains and applications that demand alignments and, more importantly, to
domains and settings where alignments of the highest quality are vital. One example is the
biomedical domain where compromises with the alignments’ quality are unacceptable.
It is one of the earliest adopters of Semantic Web techniques and there are already
initiatives addressing the demand for mappings, e.g., some OAEI tracks, Bioportal2 and
recently the Pistoia Alliance3 Ontology Mapping project.
     In addition, showcasing the benefits from large displays will likely lead to their
application in other ontology engineering areas as well.

3      Related Work
To the best of my knowledge there are no works that address ontology alignment in a
large, high-resolution display setting. Ontology alignment systems with user interfaces
do exist, some of the OAEI tools provide visual interfaces as well, but they only consider
regular visualization and interaction settings, e.g., desktop and mouse. Tools’ interfaces
often resulted from the need to provide user input to matchers [15], and functionality
and usability issues with them exist (recent reviews in [12, 20]). They are rarely theoret-
ically grounded or based on advances in cognitive theories (a notable exception is [16]).
Earlier evaluations can be found in [15, 16, 18, 24].
 2
     http://bioportal.bioontology.org/—repository of biomedical ontologies
 3
     http://www.pistoiaalliance.org/—a non-profit alliance of life science companies
    Since there is no work considering ontology alignment in a large display setting,
this section presents related work along several aspects. First, subsection 3.1 provides
some considerations for ontology alignment and identifies opportunities for the applica-
tion of large displays in its context—improving ontologies’ and alignments’ navigation,
space to support users’ thinking process and collaboration. Each of the following sub-
sections (3.2, 3.3, 3.4) focuses on one of the opportunities and presents findings from
relevant fields in support of it.


3.1   Ontology Alignment (some considerations)

Ontology alignment is a complex and challenging task imposing significant cognitive
demands on the users [15, 16]. Users are most often involved in selecting matchers
and configuring combination strategies, validating automatically generated mappings,
etc. The alignment process usually involves the user exploring both (unfamiliar) on-
tologies in order to become familiar with them and their formal representations and to
understand their modelers’ view of the domain. Further, the user needs to explore the
mappings computed by the tool’s algorithms in order to determine their correctness and
create mappings missed by the system. It is an inherently error-prone process due to
different levels of users’ domain and knowledge representation expertise, experience,
human biases, misinterpretations, etc.
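    As an illustration only (the representation and names below are assumptions made
for the sake of the example, not taken from any particular alignment system), the
validation part of this workflow can be sketched as a loop over candidate mappings in
which the user accepts or rejects each suggestion:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Mapping:
            source: str        # entity in the first ontology, e.g., its URI
            target: str        # entity in the second ontology
            relation: str      # e.g., equivalence or subsumption
            confidence: float  # similarity score computed by the matchers

        def validate(candidates, ask_user):
            # ask_user is any callable returning True (accept) or False (reject);
            # in a real tool this is where the user inspects both ontologies
            alignment = set()
            for m in sorted(candidates, key=lambda m: m.confidence, reverse=True):
                if ask_user(m):
                    alignment.add(m)
            return alignment

    Mappings missed by the system and created manually by the user would simply be
added to the returned set; the point of the sketch is that every single decision requires
inspecting the concepts’ definitions and contexts in both ontologies, which is exactly
where navigation and display space matter.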
    The tasks above demand extensive navigation in both ontologies and their align-
ment. Depending on their visual representation, navigation may involve panning, zoom-
ing, scrolling, collapsing/expanding nodes, etc. and could result in users getting lost and
disoriented; disorientation was, indeed, reported by Protégé users in [32]. Interactive
navigation and ontology exploration are discussed as part of a cognitive support frame-
work for ontology alignment [16]. The navigation within the alignment was improved
in a large schema mapping tool [29] but it is not clear how it impacted the understand-
ing of the ontologies and alignments. In the ontology modeling field, navigation was
outlined as one of the areas that demand cognitive support [13]. Navigation and related
behaviors were studied in [11] and requirements for cognitive support were devised.
    Navigating and exploring ontologies serves to inspect and compare mappings, con-
cept definitions and contexts, etc. for various purposes, e.g., to decide if a mapping is
correct or if there is a better representation of the relationship between two concepts.
To do so, the user switches between views and windows while holding and processing
the necessary information in working memory, which has limited capacity and duration
(3±1 items and 10–15 sec. without rehearsal). Activities such as these could be effectively
supported by extra display space that simultaneously accommodates multiple (connected)
representations and thus reduces the memory load. Decision-making strategies in-
volving comparisons are discussed in [16]. Other reasons for comparing and contrast-
ing activities include revising previous decisions, the reasons behind them and the state
of the process at that time; simulating, exploring and evaluating the consequences of a
validation; identifying and resolving conflicts, etc. These tasks become even more
important and information-demanding when the alignment happens over a long period
or in a collaborative setting. Exploration and comparison are also necessary for evaluating
matchers’ performance, especially if no reference alignment is available, and can also
serve to identify potential errors [4]. Verification, discussed for ontology modeling
in [13], is also relevant to ontology alignment.
     (Large-scale) Ontology alignment, similarly to ontology development, is hardly a
single-person task, and social and collaborative matching is one of the challenges iden-
tified in [31] and a requirement in [20]. It is unlikely that one person possesses the
domain knowledge needed to map all parts of the ontologies. Several people working
together can discuss doubtful mappings and potentially reduce errors in the alignment.
There is, however, little work done in this direction.


3.2   Navigation in Digital Environments

Spatial navigation is the result of a complex interaction of cognitive processes [36]. Navi-
gation in digital information spaces in connection to humans’ spatial abilities has been the
subject of earlier studies outside the context of large displays. In a number of studies,
participants with higher spatial abilities were more efficient in information-seeking tasks
in a large file system, an online environment, a modified browser, an online shopping
database system, a hypermedia system and a command-line interface. People with lower
spatial abilities got lost in a hierarchical file system, completed fewer tasks and ‘were
hesitant to explore large numbers of categories’. It has been suggested [7] that higher
spatial abilities support the construction of better mental models of the system, which are
further employed while searching and navigating it. Differences in performance can be
addressed by providing navigational aids, e.g., maps, that reduce the need for creating
a mental model.
    A significant part of the benefits provided by large displays is connected to
humans’ spatial abilities. Large displays often replace virtual navigation with physical
navigation, ‘thus allowing the user to exploit embodied human abilities such as spatial awareness,
proprioception, and spatial memory’ [3]. Experiments in a virtual world [33, 34] com-
pared a large, projected-wall display and a standard desktop display (with equivalent
content). In the wall condition the users adopted more efficient cognitive strategies for
egocentric tasks and performed better in path integration [34]; mental rotation, 3D nav-
igational tasks, mental map formation and memory [33]. The effect of the display size
on performance seemed independent of other influencing factors, e.g., interactivity and
mental aid (e.g., landmarks). The performance for navigational tasks in large display
conditions with more environmental cues (higher resolution or wider field of view) was
improved for both males and females [10, 26]. A significant effect of display size and
task complexity was shown in an abstract data manipulation task [25], and for low-level
visualization and navigation tasks, e.g., finding and comparing very detailed data [5].


3.3   Using Space to Think by Using Vision to Think

One promising application of display space is to use the ‘space to think’ [2] by ‘using
vision to think’ [8]. Visual representations serve to offload work to the perceptual sys-
tem and expand working memory storage and processing capacity [8]. A high correlation
between working memory capacity and general reasoning abilities was demonstrated in
large-scale studies [1], but it is not yet well understood whether the storage or the
processing capabilities (or both) account for the performance differences. Both storage and pro-
cessing capabilities are addressed by the extra space of large displays. Instead of view
switching, it could accommodate multiple representations simultaneously, thus reduc-
ing the memory load. In the field of software comprehension tools, two of the principles
in [35] consider freeing working memory by employing artifacts from the environment
and choosing representations that are easier for the human perceptual system to comprehend.
    Large displays provide more space for the sensemaking process [3]. A series of 11
data-analysis workshops conducted on a six-meter whiteboard provided examples of
how domain experts in various domains, including ontology alignment, envision using
the additional space [23]: using the space for persistent views of the data; showing
multiple views side by side; spreading data out to enable easier selection and modification;
supporting a ‘trail of thoughts’ and enabling backtracking and exploration of possibilities
by visually depicting earlier steps. Multiple coordinated views on a wall-sized display led
to insights of different quality, attributed to fewer view-switching distractions and to
staying longer in ‘insight-generating mental states’ [28]. In the context of ontology
alignment, users can be supported during comparison and contrasting activities by
presenting information persistently (instead of keeping it in memory while switching
between views) and by offloading part of the processing to the perceptual system (instead
of memory). Comparing the shape and color of simple objects was faster and caused
fewer errors when the information was presented simultaneously instead of at different
zoom levels or in different views [27]. Since visits between two juxtaposed views are cheaper
(in comparison to zooming), more visits were made, which could contribute to fewer
errors [27].
    Additionally, multiple views allow for balancing the advantages and disadvantages
of different visual representations [21]. For large, heterogeneous datasets presenting the
different aspects of the data ‘may benefit user cognition’ [21] and, as also shown in the
context of ontology alignment [17], is suitable for different tasks. Different representa-
tions of the same data influence task efficiency and complexity and could even affect
decision-making strategies [37].

3.4   Collaboration
Large displays naturally support two behaviors observed in a collocated collaborative
setting: territoriality, i.e., separating the space into personal, group and storage space, and
fluidly changing collaboration styles ranging from loosely coupled to closely coupled
interaction. In contrast to desktop settings, they support several people working in par-
allel on different parts of the workspace by providing multiple simultaneous inputs and
enough space to accommodate several (copies of) representations. Mutual awareness
and background information have been identified as factors contributing to the success
of collocated settings; they help in communication and coordination between people
and are also supported by extra space. Recently, due to geographically distributed teams,
mixed-presence settings have gained attention [30]. In such settings a shared workspace
is created by connecting large remote displays where people interact with artefacts and
can observe each other’s actions as if they were working in a collocated setting. Some
awareness mechanisms (hindered in a regular remote setting) such as territoriality and
view orientation are supported in a mixed-presence setting due to the extra space [30].
4     Research Questions & Hypotheses
The research questions (R) and related hypotheses (H) explore different ways in which
the extra space could lead to improvements in the alignments’ quality and po-
tentially speed up the ontology alignment process:
    – R1: Are there benefits from applying large displays to ontology alignment for indi-
      vidual users, and how should such tools be designed and built?
        • R1.1: Would the use of large displays help users acquire a better understanding
          of the ontologies and alignments, and how?
          H1.1: Users will acquire better and/or faster understanding of the ontologies
          and alignments due to improved navigation within them.
        • R1.2: Would the use of large displays support users during the ontology align-
          ment process, and how?
          H1.2: Externalizing and supporting the thinking process by simultaneously
          providing multiple (connected) views will allow users to offload some of the
          cognitive processes to the perceptual system thus reducing their cognitive load.
    – R2: Are there benefits from applying large displays to ontology alignment in a
      collaborative setting, and how should such tools be designed and built?
      H2: Collaboration in collocated and mixed-presence settings will be more effi-
      ciently supported due to the additional space.


5     Preliminary Results
Preliminary results in connection to the hypotheses above have not been obtained yet.
Preliminary work to develop requirements for user support in large-scale ontology
alignment has started and has consisted of user and literature evaluations of state-of-the-art
systems [12, 20]. Other authors have conducted a number of workshops, including an
alignment workshop, for studying interaction techniques for large displays [23]. Their
study provides examples of how domain experts envision the usage of extra space for
ontology alignment and evidence for the practicability of H1.2.


6     Approach & Evaluation Plan
Further review of related literature will be performed to deepen my understanding of
the areas covered by the hypotheses and to identify suitable visualization and inter-
action techniques. Depending on the hypothesis this literature will be selected from:
navigation in complex digital environments in the context of software and knowledge
engineering, schema matching, collaborative ontology engineering, design guidelines
for collaborative environments, interaction techniques for large displays, etc. Cogni-
tive task analysis or alternatively ‘cheaper’ cognitive walkthroughs with related/existing
systems (in connection to the tasks and requirements identified in [16, 20]) will be con-
ducted to envision places for introducing multiple views. This will result in the design
and implementation of a user interface for an ontology alignment system taking advan-
tage of the extra space available on a large, high-resolution display.
    Conducting user studies with domain expert and novice users in both laboratory and
everyday settings would be very beneficial during the design phase and is necessary
for the evaluation of the resulting user interfaces. The experiments will necessarily cover at
least the following conditions: size (small, large) × interface (H1.1: with/without naviga-
tional aids; H1.2: with/without multiple views). A combination of measures will be used
for evaluation—performance metrics such as response time and accuracy, think-aloud
protocols, collected activity logs and self-reported metrics (NASA-TLX, often used for
cognitive load, and SUS for usability; secondary-task response time for extraneous cog-
nitive load). Precision, recall and F-measure will be used to measure the impact on the
alignments’ quality. Collaborative features could be evaluated with a heuristic evalua-
tion for groupware. Depending on the resources, collaborative sessions and interviews
with the sessions’ participants could be conducted and analyzed.


7    Reflections
My previous work in the area of ontology alignment, e.g., [12, 19, 20], together with
an analysis of related work in the field, has provided an understanding of the issues
arising during the process. To address them, a review of literature from a broad range of
fields has been conducted, covering cognitive psychology basics, navigation in digital
environments, software and knowledge engineering, the design of collaborative environments,
etc., and several promising directions were identified in the hypotheses.
Acknowledgments. I am grateful to my supervisor Prof. Patrick Lambrix. This work
has been financially supported by SeRC, CUGS and the EU FP7 project VALCRI.


References
 1. Working-memory capacity explains reasoning ability—and a little bit more. J. Intelligence,
    30:261–288, 2002.
 2. C Andrews, A Endert, and C North. Space to think: Large high-resolution displays for
    sensemaking. In CHI 2010, pages 55–64, 2010.
 3. C Andrews, A Endert, B Yost, and C North. Information visualization on large, high-
    resolution displays: Issues, challenges, and opportunities. J. Info. Vis., 10:341–355, 2011.
 4. J Aurisano, A Nanavaty, and I Cruz. Visual analytics for ontology matching using multi-
    linked views. In VOILA 2015, pages 25–36, 2015.
 5. R Ball and C North. Effects of tiled high-resolution display on basic visualization and navi-
    gation tasks. CHI EA 2005, pages 1196–1199, 2005.
 6. P A Bernstein and S Melnik. Model Management 2.0: Manipulating Richer Mappings. In
    ACM SIGMOD Int. Conf. on Management of data, pages 1–12, 2007.
 7. F R Campagnoni and K Ehrlich. Information retrieval using a hypertext-based help system.
    ACM Trans. Inf. Syst., 7(3):271–291, 1989.
 8. S K Card et al., editors. Readings in Information Visualization: Using Vision to Think. 1999.
 9. M Cheatham et al. Results of the oaei 2015. In OM 2015, pages 60–115, 2015.
10. M Czerwinski, D S Tan, and G G Robertson. Women take a wider view. CHI 2002, pages
    195–202, 2002.
11. T d’Entremont and M-A Storey. Using a degree of interest model to facilitate ontology
    navigation. In IEEE VL/HCC 2009, pages 127–131, 2009.
12. Z Dragisic, V Ivanova, P Lambrix, D Faria, E Jiménez-Ruiz, and C Pesquita. User validation
    in ontology alignment. In ISWC 2016, to appear.
13. N A Ernst, M-A Storey, and P Allen. Cognitive support for ontology modeling. Int. J.
    Hum.-Comput. Stud., 62:553–577, 2005.
14. J Euzenat et al. Ontology alignment evaluation initiative: six years of experience. J Data
    Seman., XV:158–192, 2011.
15. S Falconer and N Noy. Interactive techniques to support ontology matching. In Z Bellahsene,
    A Bonifati, and E Rahm, editors, Schema Matching and Mapping, pages 29–51. 2011.
16. S Falconer and M-A Storey. A Cognitive Support Framework for Ontology Mapping. In
    ISWC/ASWC 2007, pages 114–127. 2007.
17. B Fu, N Noy, and M-A Storey. Eye tracking the user experience – an evaluation of ontology
    visualization techniques. J. Semantic Web, 2014.
18. M Granitzer et al. Ontology Alignment—A Survey with Focus on Visually Supported Semi-
    Automatic Techniques. J. Future Internet, pages 238–258, 2010.
19. V Ivanova and P Lambrix. A unified approach for aligning taxonomies and debugging tax-
    onomies and their alignments. In ESWC 2013, pages 1–15. 2013.
20. V Ivanova, P Lambrix, and J Åberg. Requirements for and evaluation of user support for
    large-scale ontology alignment. In ESWC 2015, pages 3–20. 2015.
21. W. Javed and N. Elmqvist. Exploring the design space of composite visualization. In Visu-
    alization Symposium (PacificVis), 2012 IEEE Pacific, pages 1–8, 2012.
22. E Jiménez-Ruiz, B Cuenca Grau, Y Zhou, and I Horrocks. Large-scale Interactive Ontology
    Matching: Algorithms and Implementation. In ECAI 2012, pages 444–449, 2012.
23. S Knudsen, M R Jakobsen, and K Hornbæk. An exploratory study of how abundant display
    space may support data analysis. NordiCHI 2012, pages 558–567, 2012.
24. P Lambrix and A Edberg. Evaluation of ontology merging tools in bioinformatics. In Pacific
    Symposium on Biocomputing, pages 589–600, 2003.
25. C Liu et al. Effects of display size and navigation type on a classification task. CHI 2014,
    pages 4147–4156, 2014.
26. T Ni, D A Bowman, and J Chen. Increased display size and resolution improve task perfor-
    mance in information-rich virtual environments. GI 2006, pages 139–146, 2006.
27. M D Plumlee and C Ware. Zooming versus multiple window interfaces: Cognitive costs of
    visual comparisons. ACM Trans. Comput.-Hum. Interact., 13:179–209, 2006.
28. K Reda, A E Johnson, M E Papka, and J Leigh. Effects of display size and resolution on user
    behavior and insight acquisition in visual exploration. CHI 2015, pages 2759–2768, 2015.
29. G G Robertson, M P Czerwinski, and J E Churchill. Visualization of mappings between
    schemas. CHI 2005, pages 431–439, 2005.
30. P. Robinson and P. Tuddenham. Distributed tabletops: Supporting remote and mixed-
    presence tabletop collaboration. In TABLETOP 2007, pages 19–26, 2007.
31. P Shvaiko and J Euzenat. Ontology Matching: State of the Art and Future Challenges. J.
    Knowledge and Data Engineering, 25:158–176, 2013.
32. M-A Storey et al. Jambalaya: Interactive visualization to enhance ontology authoring and
    knowledge acquisition in protégé. In W. Inter. Tools for Know. Capture, 2001.
33. D S Tan, D Gergle, P Scupelli, and R Pausch. Physically large displays improve performance
    on spatial tasks. ACM Trans. Comput.-Hum. Interact., 13:71–99, 2006.
34. D S Tan, D Gergle, P G Scupelli, and R Pausch. Physically large displays improve path
    integration in 3d virtual navigation tasks. CHI 2004, pages 439–446, 2004.
35. A. Walenstein. Theory-based analysis of cognitive support in software comprehension tools.
    In W. Program Comprehension 2002, pages 75–84, 2002.
36. T Wolbers and M Hegarty. What determines our navigational abilities? J. Trends in Cog.
    Sci., 14:138–146, 2010.
37. J Zhang et al. Representations in distributed cognitive tasks. J. Cog. Sci., 18:87–122, 1994.