<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Design of a Perceptual-based Object Group Selection Technique</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Hoda</forename><surname>Dehmeshki</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">York University</orgName>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Wolfgang</forename><surname>Stuerzlinger</surname></persName>
							<email>wolfgang@cs.yorku.ca</email>
							<affiliation key="aff0">
								<orgName type="institution">York University</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Design of a Perceptual-based Object Group Selection Technique</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">560B139FF3DDDAC24B6A00191D2A4CF7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T04:33+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Selecting groups of objects is a common task in graphical user interfaces. Current selection techniques such as lasso and rectangle selection become time-consuming and error-prone in dense configurations or when the area covered by targets is large or hard to reach. This paper presents a new pen-based interaction technique that allows users to efficiently select perceptual groups formed by the Gestalt principle of good continuity.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Selecting groups of objects is a common task in graphical user interfaces and is required for many standard operations. Current selection techniques such as lasso and rectangle selection become time-consuming and error-prone in dense configurations or when the area covered by the targets is large or hard to reach. Perceptual-based selection techniques considerably reduce selection time when targets form perceptual groups, as predicted by the Gestalt principles of proximity and good continuity. However, they use heuristic, unvalidated grouping algorithms. Moreover, they allow neither editing a selection nor selecting groups with arbitrary configurations. Dehmeshki and Stuerzlinger developed a perceptual-based object group selection technique for mouse-based user interfaces <ref type="bibr" target="#b0">[1]</ref>. In their system, double-clicking on an object that is part of multiple (curvi-)linear groups selects all of those groups. To deselect an undesired group, the user alt-clicks on its first non-desired object. Three key elements distinguish that system from the present work. First, clicks can specify only the location of a group but not the direction in which a group of objects extends, which makes selection less efficient when objects belong to multiple groups. Second, it provides no support for selecting non-perceptual groups. Finally, their system relies heavily on multiple clicks, which is not appropriate for pen-based systems. This problem is shared by other techniques that use multi-clicking to cycle through different perceptual interpretations <ref type="bibr" target="#b2">[3]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">PERSEL</head><p>This paper introduces PerSel, a new pen-based object group selection technique that addresses these problems. PerSel consists of two components: the first detects good continuation groups based on a neighborhood graph; the second provides a set of pen-based interaction techniques that use the detected groups to facilitate path-based selection.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Detecting Good Continuation Groups</head><p>The system first constructs a neighborhood graph. When the user performs a straight flick gesture starting from inside an object, the system examines all edges in the neighborhood graph that are connected to this object and picks the one closest in direction and distance to the gesture. The object and the edge are called the anchor object and the anchor edge, respectively.</p><p>PerSel is based on an implementation of Feldman's model <ref type="bibr" target="#b1">[2]</ref> for linear groups, which models paths as groupings of four objects combined with a sliding-window paradigm. Given an anchor object 1 and anchor edge e, the algorithm finds all paths of length four starting from 1 along edge e, see also Figure 1. We call these paths primary paths. Then, for each primary path, the method computes a linearity coefficient (LC) that indicates how strongly the four nodes are perceived as a line:</p><formula xml:id="formula_0">LC = exp(−(a₁² + a₂² − 2ra₁a₂) / (2s²(1 − r²)))</formula><p>where a₁ and a₂ are the angles between the lines connecting the centers of the objects, and r and s are experimentally determined constants <ref type="bibr" target="#b1">[2]</ref>.</p><p>Primary paths with an LC smaller than a threshold are discarded, as they are unlikely to be perceptually salient. If at least one primary path remains, the algorithm continues as follows: for each path, it searches for potential continuations by identifying all neighbors of the last object in the path. The four-object window is then shifted to include each neighbor, and the LC is computed in turn. If the new LC is smaller than the threshold, that continuation is ignored; otherwise, the extension of the path with this neighbor is added to a stack. If all of the neighbors of the last node are unacceptable, the original path is kept; otherwise, the original path is discarded, since at least one good extension has been found. The algorithm continues until all paths on the stack have been processed.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> illustrates this. In Fig. <ref type="figure" target="#fig_0">1</ref>-a, the anchor node '1' and anchor edge 'e' are represented by thick borders. All four-vertex windows with a small LC are visualized with ovals, while those with a large LC are shown in rectangles. There are three primary paths, see Fig. <ref type="figure" target="#fig_0">1-b</ref>, but only the one inside the rectangle is considered for further extension. After two more iterations, see Fig. <ref type="figure" target="#fig_0">1-c and d</ref>, only the straight path is returned as perceptually plausible, see Fig. <ref type="figure" target="#fig_0">1-e</ref>. For line gestures that cross an object, the gesture is first decomposed into two half-gestures, using the point of the gesture closest to the center of the object. Then, the linear groups corresponding to each half are found using the above method, and the two groups are merged.</p><p>Curvilinear group detection: Similar to the linear case, when the user performs an arc gesture starting from inside an object, the system examines the edges in the neighborhood graph that are connected to this object. The edge closest to the gesture is picked, as defined by the sum of distances between the gesture points and the edge. The rest of the algorithm works as in the linear case, except that: (1) we use a curvilinearity coefficient that adapts the formula for LC by using the deviation from the average angle instead of the angles themselves, and (2) in the initial phase we only consider primary paths that turn in the same direction as the gesture.</p></div>
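The sliding-window search described above can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: the values of r, s, and the acceptance threshold are placeholder assumptions (the paper determines r and s experimentally), and all function names are ours.

```python
import math


def linearity_coefficient(p1, p2, p3, p4, r=0.7, s=1.0):
    """LC for a four-object window; a1, a2 are the turning angles between
    the three segments connecting the object centers. r and s are
    placeholder constants, not the experimentally determined values."""
    def turn(u, v):
        # angle between successive segment directions u and v
        dot = u[0] * v[0] + u[1] * v[1]
        return math.acos(max(-1.0, min(1.0, dot / (math.hypot(*u) * math.hypot(*v)))))

    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    v3 = (p4[0] - p3[0], p4[1] - p3[1])
    a1, a2 = turn(v1, v2), turn(v2, v3)
    return math.exp(-(a1**2 + a2**2 - 2 * r * a1 * a2) / (2 * s**2 * (1 - r**2)))


def grow_paths(primary_paths, neighbors, pos, threshold=0.5):
    """Stack-based sliding-window extension of the primary paths.
    neighbors(obj) yields the neighborhood-graph neighbors of obj,
    pos[obj] is its center; threshold is an illustrative value."""
    # keep only perceptually salient primary paths
    stack = [p for p in primary_paths
             if linearity_coefficient(*(pos[o] for o in p)) >= threshold]
    results = []
    while stack:
        path = stack.pop()
        extended = False
        for n in neighbors(path[-1]):
            if n in path:
                continue
            # shift the four-object window to include the neighbor
            window = path[-3:] + [n]
            if linearity_coefficient(*(pos[o] for o in window)) >= threshold:
                stack.append(path + [n])   # good continuation found
                extended = True
        if not extended:
            results.append(path)           # no acceptable extension: keep path
    return results
```

For six collinear objects, a single primary path of four objects grows into the full six-object line, matching the behavior illustrated in Figure 1.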
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Gestural Interaction</head><p>In this section we explain the gestural interaction techniques available in PerSel. As is common in pen-based systems, tapping on a single object selects it. PerSel also cancels all selections whenever the user taps the pen on the background, which affords a simple and fast way to cancel erroneous selections.</p><p>Path Selection: Performing a straight gesture across an object selects the Good Continuation group aligned with the gesture direction. Similarly, an arc gesture across an object selects the curvilinear group whose direction is similar to the gesture. In both cases, the selected group is visualized by links connecting the successive objects, see Fig. <ref type="figure" target="#fig_1">2</ref>.</p><p>Partial Path Selection: There are two alternatives for partial selection of paths. The first is to select the complete path and then cut the undesired part(s) by drawing a flick gesture across one (or two) of the visualized links, see Fig. <ref type="figure" target="#fig_2">3</ref>. The second is to initiate the selection with a flick gesture from inside an object, see Fig. <ref type="figure" target="#fig_3">4</ref>.</p><p>Resolving Non-Perceptual Groups: If a gesture corresponds to multiple potential curvilinear groups, all of them are selected. The user can then disambiguate the selection by deselecting the non-desired groups. This is similar to the partial selection technique, in that the paradigm of "cutting" links is used to separate the non-desired objects from the targets, see Fig. <ref type="figure" target="#fig_4">5</ref>.</p><p>Selecting Paths With Multiple Segments: More complex paths often consist of connected Good Continuity groups (segments). To enable selection of such paths, we introduce a new path-editing feature. Assume that a path is already selected. If the user draws a gesture across an already selected node (called an anchor), a supplementary anchor is created. The selected path is then modified by (1) automatically deselecting objects on the path beyond the new anchor, and (2) adding the new (curvi-)linear group corresponding to the new gesture and anchor to the selection, see Fig. <ref type="figure" target="#fig_5">6</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">CONCLUSION AND FUTURE WORK</head><p>This paper presented PerSel, a new gesture-based selection technique based on the Gestalt principle of Good Continuation. Performing a flick gesture crossing an object selects the (curvi-)linear group(s) that the object belongs to and that are aligned with the gesture direction. PerSel also provides interaction techniques that allow users to perform partial group selection and to select groups with arbitrary configurations. As future work, we will incorporate the Gestalt principle of similarity and extend PerSel to handle objects with different visual features such as shape and size.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Illustration of the Good Continuity grouping algorithm. (a) The anchor object 1 and edge e. (b) The rectangle and ovals visualize primary paths. (c) The path inside the rectangle is extended. (d) New potential primary paths; the one inside the rectangle is extended. (e) Objects 1-6 are grouped.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Good Continuity group selection. Performing a line (a) or arc (c) gesture selects the corresponding linear (b) or curvilinear (d) group and visualizes it by links.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Partial group selection. (a) A flick gesture across an object selects the whole group. (b) Two flick gestures deselect all objects beyond these "cut" gestures.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Partial group selection. In (a) and (c), a line and an arc gesture, respectively, start from inside an object. Only objects on the same side of the gesture are selected, as in (b) and (d).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Resolving ambiguity. (a) An arc gesture on object O1 selects both curvilinear groups. (b) A "cutting" gesture disambiguates which objects to select. (c) Only desired objects remain selected.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: (a) Target objects have thick borders. (b) A line gesture over O1 selects the corresponding group. (c) A gesture from O2 guides the selection. (d) A gesture from O3 adds the remainder of the desired objects.</figDesc></figure>
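The multi-segment path-editing step of Section 2.2 (deselect everything beyond the new anchor, then append the group detected for the new gesture) can be sketched as follows. The function and argument names are illustrative assumptions, not identifiers from the PerSel implementation.

```python
def edit_selected_path(selected, new_anchor, new_group):
    """Sketch of PerSel's path editing (illustrative names).

    selected   -- ordered list of objects in the currently selected path
    new_anchor -- already-selected object crossed by the new gesture
    new_group  -- (curvi-)linear group detected for that gesture
    """
    i = selected.index(new_anchor)
    kept = selected[:i + 1]  # (1) deselect objects beyond the new anchor
    # (2) append the new group, skipping objects already selected
    return kept + [o for o in new_group if o not in kept]
```

For example, with a selected path a-b-c-d-e and a new gesture across c whose detected group is c-x-y, the edited selection becomes a-b-c-x-y, as in the multi-gesture sequence of Figure 6.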
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Intelligent mouse-based object group selection</title>
		<author>
			<persName><forename type="first">H</forename><surname>Dehmeshki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Stuerzlinger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Smart Graphics</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Curvilinearity, covariance, and regularity in perceptual groups</title>
		<author>
			<persName><forename type="first">J</forename><surname>Feldman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Vision Research</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">20</biblScope>
			<biblScope unit="page" from="2835" to="2848" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A perceptually-supported sketch editor</title>
		<author>
			<persName><forename type="first">E</forename><surname>Saund</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Moran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM Symposium on User Interface Software and Technology-UIST&apos;94</title>
				<meeting>the ACM Symposium on User Interface Software and Technology-UIST&apos;94<address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="175" to="184" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
