<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Toward Interactive Spreadsheet Debugging</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Dietmar</forename><surname>Jannach</surname></persName>
							<email>dietmar.jannach@tu-dortmund.de</email>
							<affiliation key="aff0">
								<orgName type="institution">TU Dortmund</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Thomas</forename><surname>Schmitz</surname></persName>
							<email>thomas.schmitz@tu-dortmund.de</email>
							<affiliation key="aff1">
								<orgName type="institution">TU Dortmund</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kostyantyn</forename><surname>Shchekotykhin</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">University Klagenfurt</orgName>
								<address>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Toward Interactive Spreadsheet Debugging</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">CED8074B96634A98D1E68CCACB8DF4A5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T17:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>H.4 [Information Systems Applications]: Spreadsheets</term>
					<term>D.2.8 [Software Engineering]: Testing and Debugging</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Spreadsheet applications are often developed in a comparatively unstructured process without rigorous quality assurance mechanisms. Faults in spreadsheets are therefore common, and finding the true causes of an unexpected calculation outcome can be tedious even for small spreadsheets. The goal of the Exquisite project is to provide spreadsheet developers with better tool support for fault identification. Exquisite is based on an algorithmic debugging approach relying on the principles of Model-Based Diagnosis and is designed as a plug-in to MS Excel. In this paper, we give an overview of the project, outline open challenges, and sketch different approaches for the interactive minimization of the set of fault candidates.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>Spreadsheet applications are mostly developed in an unstructured, ad-hoc process without detailed domain analysis, principled design or in-depth testing. As a result, spreadsheets might often be of limited quality and contain faults, which is particularly problematic when they are used as decision-making aids. Over the last years, researchers have proposed a number of ways of transferring principles, practices and techniques of software engineering to the spreadsheet domain, including modeling approaches, better test support, refactoring, or techniques for problem visualization, fault localization, and repair <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b14">15]</ref>.</p><p>The Exquisite project <ref type="bibr" target="#b11">[12]</ref> continues these lines of research and proposes a constraint-based approach for algorithmic spreadsheet debugging. Technically, the main idea is to translate the spreadsheet under investigation as well as user-specified test cases into a Constraint Satisfaction Problem (CSP) and then use Model-Based Diagnosis (MBD) <ref type="bibr" target="#b13">[14]</ref> to find the diagnosis candidates. In terms of a CSP, each candidate is a set of constraints that have to be modified to correct a failure. 
In our previous works, we demonstrated the general feasibility of the approach and presented details of an MS Excel plug-in, which allows the user to interactively specify test cases, run the diagnosis process and then explore the possible candidates identified by our algorithm <ref type="bibr" target="#b11">[12]</ref>.</p><p>Using constraint reasoning and diagnosis approaches for spreadsheet debugging -partially in combination with other techniques -was also considered in <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b10">11]</ref>. While these techniques showed promising results in helping users to locate faults in spreadsheets, a number of challenges remain. In this paper, we address the question of how the end user can be better supported in situations when many diagnosis candidates are returned by the reasoning engine. We will sketch different interactive candidate discrimination approaches in which the user is queried by the system about the correctness of individual cells' values and formulas.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">DIAGNOSING SPREADSHEETS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithmic Approach.</head><p>In <ref type="bibr" target="#b13">[14]</ref>, Reiter proposed a domain-independent and logic-based characterization of the MBD problem. A diagnosable system comprises a set of interconnected components Comps, each of which can possibly fail. A system description SD specifies how components behave when they work correctly, i.e., given some inputs, the definitions in SD and Comps determine the expected outputs. In case the expected outputs deviate from what is actually observed, the diagnosis problem consists of identifying a subset of Comps, which, if assumed faulty, explains the observations. The main idea can be transferred to the spreadsheet domain as follows <ref type="bibr" target="#b11">[12]</ref>. In the example shown in Figure <ref type="figure" target="#fig_0">1</ref>, the intended formula in cell C1 should be an addition, but the developer made a typo. When testing the spreadsheet with the inputs {A1=1, A2=6} and the expected output {C1=20}, the user notices an unexpected output (36) in C1. MBD reasoning now aims to find minimal subsets of the possibly faulty components -in our case the cells with formulas -which can explain the observed discrepancy. A closer investigation of the problem reveals that only two minimal explanations exist in our example if we only allow integer values: "C1 is faulty" and "B2 is faulty". The formula in cell B1 alone cannot be the sole cause of the problem with B2 and C1 being correct, as 18 is not a factor of the expected value 20, i.e., there is no solution to the equation B1 · 18 = 20, B1 ∈ ℕ. Note that we assume that the constant values in the spreadsheet are correct. 
However, our approach can easily be extended to deal with erroneous constants.</p><p>In <ref type="bibr" target="#b11">[12]</ref>, we describe a plug-in component for MS Excel, which relies on an enhanced and parallelized version of this diagnostic procedure, additional dependency-based search space pruning, and a technique for fast conflict detection.</p><p>We have evaluated our approach in two ways. (A) We analyzed the required running times using a number of spreadsheets in which we injected faults (mutations). The results showed that our method can find the diagnoses for many small- and mid-sized spreadsheets containing about 100 formulas within a few seconds. (B) We conducted a user study in the form of an error detection exercise involving 24 subjects. The results showed that participants who used the Exquisite tool were both more effective and efficient than those who only relied on MS Excel's standard fault localization mechanisms. A post-experiment questionnaire indicated that both groups would appreciate better tool support for fault localization in commercial tools <ref type="bibr" target="#b11">[12]</ref>.</p><p>The Problem of Discriminating Between Diagnoses.</p><p>While our results so far are promising, an open issue is that the set of diagnosis candidates can be large. Finding the true cause of the error within a larger set of possible explanations can be tedious and ultimately make the approach impractical when a user has to inspect too many alternatives. Since this is a general problem of MBD, the question of how to help the user to better discriminate between the candidates and focus on the most probable ones has been the focus of several works since the early days of MBD research <ref type="bibr" target="#b6">[7]</ref>. In the next section, we will propose two possible remedies for this problem in the spreadsheet domain.</p></div>
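The diagnosis step of Section 2 can be illustrated with a small brute-force sketch of the Figure 1 example. Note that the concrete formulas for B1 and B2 below are assumptions for illustration; the paper only fixes their values at 2 and 18.

```python
from itertools import combinations, product

# Hypothetical encoding of the Figure 1 example. The formulas assumed for
# B1 and B2 are illustrative; the paper only states that B1 = 2 and B2 = 18.
inputs = {"A1": 1, "A2": 6}
formulas = {
    "B1": lambda v: v["A1"] * 2,
    "B2": lambda v: v["A2"] * 3,
    "C1": lambda v: v["B1"] * v["B2"],  # faulty: the intended formula is B1 + B2
}
expected = {"C1": 20}                    # the user-specified test case
ORDER = ["B1", "B2", "C1"]               # topological evaluation order
DOMAIN = range(0, 101)                   # small integer domain, as in the paper

def consistent(relaxed):
    """True if some integer assignment to the relaxed cells satisfies the
    remaining formulas together with the expected test-case output."""
    free = sorted(relaxed)
    for values in product(DOMAIN, repeat=len(free)):
        v = dict(inputs, **dict(zip(free, values)))
        for cell in ORDER:
            if cell not in relaxed:
                v[cell] = formulas[cell](v)
        if all(v[cell] == want for cell, want in expected.items()):
            return True
    return False

# Enumerate subset-minimal diagnoses by increasing cardinality.
diagnoses = []
for size in range(1, len(formulas) + 1):
    for cand in combinations(sorted(formulas), size):
        if any(set(d) <= set(cand) for d in diagnoses):
            continue  # a proper subset is already a diagnosis
        if consistent(set(cand)):
            diagnoses.append(cand)

print(diagnoses)  # [('B2',), ('C1',)] -- the two minimal explanations
```

The relaxation of {B1} alone fails exactly as argued above: with B2 = 18 kept, no integer B1 satisfies B1 · 18 = 20.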
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">TOWARD INTERACTIVE DEBUGGING</head><p>Early works in MBD research dealt with the fault diagnosis of electrical circuits. In this domain, an engineer can make additional measurements, e.g., of the voltage at certain nodes. These measurements can then help to reduce the set of candidates because some observations may rule out certain explanations for the observed behavior.</p><p>Each measurement, however, induces additional costs or effort for the user. One goal of past research was thus to automatically determine "good" measurement points, i.e., those which help to narrow down the candidate space fast and thus minimize the number of required measurements. In <ref type="bibr" target="#b6">[7]</ref>, for example, an approach based on information theory was proposed in which the possible measurement points were ranked according to the expected information gain.</p><p>Given the unluckily chosen test case of Figure <ref type="figure" target="#fig_2">2</ref>, MBD returns four possible single-element candidates {B1}, {B2}, {C1}, and {D1}, i.e., every formula could be the cause. To narrow down this set, we could query the user about the correctness of the intermediate results in the cells B1, B2, and C1. If we ask the user for the correct value of B1, then the user will answer "2". Based on that information, B1 can be ruled out as a diagnosis and the problem must be somewhere else. However, if we had asked the user for the correct value of C1, the user would have answered "20" and we could immediately infer that {D1} remains as the only possible diagnosis.</p></div>
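The information-theoretic measurement selection of [7] can be sketched as follows. Both the fault priors and the values that each diagnosis predicts at the candidate measurement points are purely illustrative assumptions (not taken from the paper); the sketch only shows the mechanics of ranking by expected posterior entropy.

```python
import math

# Hypothetical fault priors and per-diagnosis predicted cell values
# (illustrative assumptions, not data from the paper).
priors = {"{B1}": 0.1, "{B2}": 0.2, "{C1}": 0.3, "{D1}": 0.4}
predicted = {                      # value each diagnosis predicts at each point
    "{B1}": {"B1": 172, "C1": 190},
    "{B2}": {"B1": 2,   "C1": 190},
    "{C1}": {"B1": 2,   "C1": 190},
    "{D1}": {"B1": 2,   "C1": 20},
}

def expected_entropy(cell):
    """Expected posterior entropy (in bits) after measuring `cell`."""
    outcomes = {}
    for d, p in priors.items():
        outcomes.setdefault(predicted[d][cell], []).append(p)
    h = 0.0
    for ps in outcomes.values():
        p_outcome = sum(ps)
        # entropy of the renormalized posterior, weighted by outcome probability
        h += p_outcome * -sum(
            q / p_outcome * math.log2(q / p_outcome) for q in ps)
    return h

# The measurement with the lowest expected remaining entropy is asked first.
best = min(["B1", "C1"], key=expected_entropy)
print(best)
```

Under these (assumed) priors, measuring C1 is expected to leave less uncertainty than measuring B1, matching the intuition of the example above.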
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Example</head><p>The question arises of how we can automatically determine which questions to ask the user. Next, we sketch two possible strategies for interactive querying.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Querying for Cell Values</head><p>The first approach (Algorithm 1) is based on interactively asking the user about the correct values of intermediate cells as done in the example<ref type="foot" target="#foot_0">1</ref>. The goal of the algorithm is to minimize the number of required interactions. Therefore, as long as there is more than one diagnosis, we determine which question would help us most to reduce the set of remaining diagnoses. To do so, we check for each possible question (intermediate cell c) how many diagnoses would remain if we knew that the cell value val is correct given the test case T. Since every diagnosis candidate d corresponds to a relaxed version CSP′ of the original CSP, where the latter is a translation of the spreadsheet P and the test case T, we check if CSP′ together with the assignment {c = val} has a solution. Next, we ask the user for the correct value of the cell for which the smallest number of remaining diagnoses was observed. The user-provided value is then added to the set of values known to be correct for T and the process is repeated.</p><p>To test the approach, we used a number of spreadsheets containing faults from <ref type="bibr" target="#b11">[12]</ref>, measured how many interactions it takes to isolate the single correct diagnosis using Algorithm 1, and compared the results to a random measurement strategy.</p><p>The results given in Table <ref type="table" target="#tab_0">1</ref> show that for the tested examples the number of required interactions can be measurably lowered compared to a random strategy. The sales forecast spreadsheet, for example, comprises 143 formulas (#C) and contains 1 fault (#F). Using our approach, only 11 cells (Min) have to be inspected by the user to find the diagnosis explaining a fault within 89 diagnosis candidates (#D). 
Repeated runs of the randomized strategy led to more than 45 interactions (sRand) on average.</p><p>As our preliminary evaluation shows, the developed heuristic decreases the number of user interactions required to find the correct diagnosis. In our future work, we will also consider other heuristics. Note, however, that there are also problem settings in which all possible queries lead to the same reduction of the candidate space. Nonetheless, as the approach proves helpful at least in some cases, we plan to explore the following extensions in the future.</p><p>Instead of asking for expected cell values, we can ask for the correctness of individual calculated values or for range specifications. This requires less effort by the user but does not guarantee that one single candidate can be isolated.</p><p>Additional test cases can help to rule out some candidates. Thus, we plan to explore techniques for automated test-case generation. As spreadsheets often consist of multiple blocks of calculations which only have few links to other parts of the program, one technique could be to generate test cases for such smaller fragments, which are easier for the user to validate.</p></div>
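The selection step of Algorithm 1 can be sketched on the Figure 2 example. The formulas assumed for B1 and B2 are again illustrative, and the faulty D1 (a plus instead of the intended multiplication) is an assumption chosen so that the observed output is 30 instead of 200, as in the example.

```python
from itertools import product

# Hypothetical encoding of the Figure 2 example. The formulas for B1 and B2
# are assumptions; the fault in D1 (plus instead of times) is assumed so
# that the spreadsheet shows 30 where 200 is expected.
inputs = {"A1": 1, "A2": 6}
formulas = {
    "B1": lambda v: v["A1"] * 2,
    "B2": lambda v: v["A2"] * 3,
    "C1": lambda v: v["B1"] + v["B2"],   # corrected formula from Figure 1
    "D1": lambda v: v["C1"] + 10,        # faulty: intended C1 * 10
}
ORDER = ["B1", "B2", "C1", "D1"]
expected = {"D1": 200}
DOMAIN = range(0, 201)

def computed(cell):
    """Value currently displayed in the (faulty) spreadsheet."""
    v = dict(inputs)
    for c in ORDER:
        v[c] = formulas[c](v)
    return v[cell]

def has_solution(relaxed, fixed):
    """Does the CSP with cell `relaxed` dropped and `fixed` asserted
    (together with the test-case output) have a solution?"""
    for val in DOMAIN:                   # one relaxed cell -> one free variable
        v = dict(inputs, **{relaxed: val})
        for c in ORDER:
            if c != relaxed:
                v[c] = formulas[c](v)
        if all(v[c] == x for c, x in {**expected, **fixed}.items()):
            return True
    return False

# The four single-cell diagnoses returned by MBD for this test case.
diagnoses = ["B1", "B2", "C1", "D1"]
counts = {}
for c in ["B1", "B2", "C1"]:             # candidate query cells
    val = computed(c)                    # currently displayed value of c
    counts[c] = sum(has_solution(d, {c: val}) for d in diagnoses)

print(counts)                            # {'B1': 3, 'B2': 3, 'C1': 1}
print(min(counts, key=counts.get))       # 'C1' -- the most informative query
```

Asking for C1 leaves only one consistent diagnosis ({D1}), whereas asking for B1 or B2 would leave three, so the heuristic queries C1 first.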
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Querying for Formula Correctness</head><p>Calculating expected values for intermediate cells can be difficult for users, as they also have to consider the cells preceding the one under investigation. Thus, we propose additional strategies in which we ask for the correctness of individual formulas. Answering such queries can in the best case be done by inspecting only one particular formula.</p><p>1. We can query the user about the elements of the most probable diagnoses as done in <ref type="bibr" target="#b15">[16]</ref>, e.g., by limiting the search depth and by estimating fault probabilities.</p><p>2. In case of multiple-fault diagnoses, we can ask the user to inspect those formulas first that appear in the most diagnoses. If one cell appears in all diagnoses, it must definitely contain an error.</p><p>3. After having queried the user about the correctness of one particular formula, we can search for copy-equivalent formulas and ask the user to confirm the correctness of these formulas. The rationale of this last strategy, which we will now discuss in more detail, is that in many real-world spreadsheets, structurally identical formulas exist in neighboring cells, which, for example, perform a row-wise aggregation of cell values. Such repetitive structures are one of the major reasons that the number of diagnosis candidates grows quickly. Thus, when the user has inspected one formula, we can ask them whether the given answer also applies to all copy-equivalent formulas, which we can automatically detect.</p><p>In the example spreadsheet shown in Figure <ref type="figure" target="#fig_4">3</ref>, the user made a mistake in cell M13, entering a minus instead of the intended plus sign. A basic MBD method would in the worst case, and depending on the test cases, return every single formula as an equally ranked diagnosis candidate. 
When applying the value-based query strategy of Section 3.2, the user would be asked to give feedback on the values of M1 to M12, which, however, requires a lot of manual calculation.</p><p>With the techniques proposed in this section, the formulas of the spreadsheet would first be ranked based on their fault probability. Let us assume that our heuristics say that users most probably make mistakes when writing IF-statements. In addition, the formula in M13 is syntactically more complex than those in M1 to M12 and thus more likely to be faulty.</p><p>Based on this ranking, we would, for example, ask the user to inspect the formula of G1 first. Given the feedback that the formula is correct, we can ask the user to check the copy-equivalent formulas in G1 to L12. This task can easily be done by the user by navigating through these cells and checking whether the formulas properly reflect the intended semantics, i.e., that the formulas were copied correctly. After that, the user is asked to inspect formula M13, which the heuristic ranks next and which is actually the faulty one. In this example, only three user interactions were thus needed to find the cause of the error. In a random strategy, the user would have to inspect up to half of the formulas on average, depending on the test case. The evaluation of the described techniques is part of our ongoing work.</p></div>
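Copy-equivalence can be detected automatically by rewriting cell references relative to the formula's own cell (in the style of R1C1 notation): correctly copied formulas then become textually identical. A minimal sketch, with hypothetical formulas in the spirit of Figure 3:

```python
import re

def normalize(cell, formula):
    """Rewrite A1-style references in `formula` relative to `cell`
    (R1C1 style), so that correctly copied formulas become identical."""
    def col_num(letters):
        n = 0
        for ch in letters:
            n = n * 26 + ord(ch) - ord("A") + 1
        return n
    m = re.match(r"([A-Z]+)(\d+)$", cell)
    base_col, base_row = col_num(m.group(1)), int(m.group(2))
    def repl(ref):
        dc = col_num(ref.group(1)) - base_col
        dr = int(ref.group(2)) - base_row
        return f"R[{dr}]C[{dc}]"
    return re.sub(r"([A-Z]+)(\d+)", repl, formula)

# Hypothetical formulas: M13 deviates from the copy-equivalent row sums
# in M1..M12 (a minus instead of the intended plus).
sheet = {"M1": "=SUM(G1:L1)+F1", "M2": "=SUM(G2:L2)+F2", "M13": "=SUM(G13:L13)-F13"}
groups = {}
for cell, formula in sheet.items():
    groups.setdefault(normalize(cell, formula), []).append(cell)

print(groups)  # M1 and M2 fall into one group; M13 stands alone
```

A single user answer about one formula in a group can then be propagated to all other cells of the same group.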
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">User Acceptance Issues</head><p>Independent of the chosen strategy, user studies have to be performed to assess which kinds of user input one can realistically expect, e.g., for which problem scenarios the user should be able to provide expected (ranges of) values for intermediate cells. In addition, the spreadsheet inspection exercise conducted in <ref type="bibr" target="#b11">[12]</ref> indicates that users follow different fault localization strategies: some users, for example, start from the inputs whereas others begin at the "result cells". Any interactive querying strategy should therefore be carefully designed and assessed with real users. As part of future work, we furthermore plan to develop heuristics to select one of several possible debugging techniques depending on the user's problem identification strategy.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">ADDITIONAL CHALLENGES</head><p>Other open issues in the context of MBD-based debugging that we plan to investigate in future work include the following aspects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Probability-Based Ranking.</head><p>Another approach to discriminate between diagnoses is to try to rank the sometimes numerous candidates in a way that those considered to be the most probable ones are listed first. Typically, one would, for example, list diagnosis candidates of smaller cardinality first, assuming that single faults are more probable than double faults. In addition, we can use fault statistics from the literature for different types of faults to estimate the probability of each diagnosis. In the spreadsheet domain, we could also rely on indicators like formula complexity or other spreadsheet smells <ref type="bibr" target="#b9">[10]</ref>, the location of the cell within the spreadsheet's overall structure, results from Spectrum-Based Fault Localization <ref type="bibr" target="#b10">[11]</ref>, or the number of recent changes made to a formula. User studies in the form of spreadsheet construction exercises as done in <ref type="bibr" target="#b5">[6]</ref> can help to identify or validate such heuristics.</p><p>Problem Encoding and Running Times.</p><p>For larger problem instances, the required running times for the diagnosis can exceed what is acceptable for interactive debugging. Faster commercial constraint solvers can alleviate this problem to some extent. However, automated problem decomposition and dependency analysis methods also represent powerful means of reducing the search complexity and should be explored further.</p><p>Another open issue is that in works relying on a CSP encoding of the spreadsheets, e.g., <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b10">11]</ref> and our work, the calculations are limited to integers, which is caused by the limited floating-point support of free constraint solvers. More research is required in this area, including the incorporation of alternative reasoning approaches like, e.g., linear optimization.</p></div>
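The probability-based ranking sketched above can be illustrated with a toy prior model. The per-cell fault probabilities derived from complexity indicators (operator count, presence of IF) and all weights are illustrative assumptions, not statistics from the paper or the literature:

```python
from math import prod

# Hypothetical per-cell fault probabilities from simple complexity
# indicators; the weights are illustrative assumptions only.
def fault_prob(formula):
    p = 0.01
    p += 0.02 * sum(formula.count(op) for op in "+-*/")   # operator count
    p += 0.10 * formula.upper().count("IF(")              # IF-statements
    return min(p, 0.99)

cells = {
    "M1":  "=G1+H1+I1",
    "M13": "=IF(L13>0,G13-H13,0)",
}
probs = {c: fault_prob(f) for c, f in cells.items()}

def diagnosis_prob(diag):
    """Prior probability that exactly the cells in `diag` are faulty."""
    return prod(probs[c] for c in diag) * prod(
        1 - probs[c] for c in probs if c not in diag)

candidates = [("M1",), ("M13",), ("M1", "M13")]
ranked = sorted(candidates, key=diagnosis_prob, reverse=True)
print(ranked)  # the complex IF-formula in M13 is ranked first
```

This also reproduces the cardinality heuristic: under small per-cell fault probabilities, the double-fault diagnosis automatically lands at the bottom of the ranking.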
<div xmlns="http://www.tei-c.org/ns/1.0"><head>User Interface Design.</head><p>Finally, as spreadsheet developers are usually not programmers, the user interface (UI) design plays a central role, and suitable UI metaphors and a corresponding non-technical terminology have to be developed. In Exquisite, we tried to leave the user as much as possible within the familiar MS Excel environment. Certain concepts like "test cases" are, however, not present in modern spreadsheet tools and require some learning effort from the developer. The recent work of <ref type="bibr" target="#b7">[8]</ref> indicates that users are willing to spend some extra effort, e.g., in test case specification, to end up with more fault-free spreadsheets.</p><p>How the interaction mechanisms should actually be designed to be usable, at least by experienced users, is largely open in our view. In previous spreadsheet testing and debugging approaches like <ref type="bibr" target="#b0">[1]</ref> or <ref type="bibr" target="#b1">[2]</ref>, for example, additional input was required from the user. In-depth studies about the usability of these extensions to standard spreadsheet environments are quite rare.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">SUMMARY</head><p>In this paper, we have discussed perspectives of constraint and model-based approaches for algorithmic spreadsheet debugging. Based on our insights obtained so far from the Exquisite project, we have identified a number of open challenges in the domain and outlined approaches for interactive spreadsheet debugging.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A spreadsheet with an unexpected output</figDesc><graphic coords="1,334.41,459.04,200.83,74.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2</head><label>2</label><figDesc>Figure 2 exemplifies how additional measurements (inputs by the user) can help us to find the cause of a fault. The example is based on the one from Figure 1. The user has corrected the formula in C1 and added a formula in D1 that should multiply the value of C1 by 10. Again, a typo was made and the observed result in D1 is 30 instead of the expected value of 200 for the input values {A1=1, A2=6}.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Another faulty spreadsheet</figDesc><graphic coords="2,318.37,53.80,236.00,76.46" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Algorithm 1 :</head><label>1</label><figDesc>Querying cell values. Input: a faulty spreadsheet P, a test case T. S = diagnoses for P given T; while |S| > 1 do: foreach intermediate cell c ∈ P not asked so far do: val = computed value of c given T; count(c) = 0; foreach diagnosis d ∈ S do: CSP′ = CSP of P given T \ Constraints(d); if CSP′ ∪ {c = val} has a solution then inc(count(c)); query the user for the expected value v of the cell c where count(c) is minimal; T = T ∪ {c = v}; S = diagnoses for P given T.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: A small extract of a faulty spreadsheet with structurally identical formulas</figDesc><graphic coords="3,316.83,53.80,235.98,71.54" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Results for querying cell values.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Actually, a user-provided range restriction for C1 (15 ≤ C1 ≤ 25) would have been sufficient in the example.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work was supported by the EU through the programme "Europäischer Fonds für regionale Entwicklung -Investition in unsere Zukunft" under contract number 300251802.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">AutoTest: A Tool for Automatic Test Case Generation in Spreadsheets</title>
		<author>
			<persName><forename type="first">R</forename><surname>Abraham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Erwig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings VL/HCC 2006</title>
				<meeting>VL/HCC 2006</meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="43" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">GoalDebug: A Spreadsheet Debugger for End Users</title>
		<author>
			<persName><forename type="first">R</forename><surname>Abraham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Erwig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ICSE 2007</title>
				<meeting>ICSE 2007</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="251" to="260" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Constraint-based Debugging of Spreadsheets</title>
		<author>
			<persName><forename type="first">R</forename><surname>Abreu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Riboira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wotawa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. CIbSE 2012</title>
				<meeting>CIbSE 2012</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The Right Choice Matters! SMT Solving Substantially Improves Model-Based Debugging of Spreadsheets</title>
		<author>
			<persName><forename type="first">S</forename><surname>Außerlechner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fruhmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wieser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Spörk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mühlbacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wotawa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. QSIC 2013</title>
				<meeting>QSIC 2013</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="139" to="148" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Refactoring meets Spreadsheet Formulas</title>
		<author>
			<persName><forename type="first">S</forename><surname>Badame</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dig</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ICSM 2012</title>
				<meeting>ICSM 2012</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="399" to="409" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">An Experimental Study of People Creating Spreadsheets</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Gould</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM TOIS</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="258" to="272" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Diagnosing Multiple Faults</title>
		<author>
			<persName><forename type="first">J</forename><surname>De Kleer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">C</forename><surname>Williams</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="97" to="130" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Improving Spreadsheet Test Practices</title>
		<author>
			<persName><forename type="first">F</forename><surname>Hermans</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. CASCON 2013</title>
				<meeting>CASCON 2013</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="56" to="69" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Supporting Professional Spreadsheet Users by Generating Leveled Dataflow Diagrams</title>
		<author>
			<persName><forename type="first">F</forename><surname>Hermans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pinzger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Van Deursen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICSE 2011</title>
				<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="451" to="460" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Detecting Code Smells in Spreadsheet Formulas</title>
		<author>
			<persName><forename type="first">F</forename><surname>Hermans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pinzger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Van Deursen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ICSM 2012</title>
				<meeting>ICSM 2012</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="409" to="418" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">On the Empirical Evaluation of Fault Localization Techniques for Spreadsheets</title>
		<author>
			<persName><forename type="first">B</forename><surname>Hofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Riboira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wotawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Abreu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Getzner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. FASE 2013</title>
				<meeting>FASE 2013</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="68" to="82" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Model-based diagnosis of spreadsheet programs: A constraint-based debugging approach</title>
		<author>
			<persName><forename type="first">D</forename><surname>Jannach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schmitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Automated Software Engineering</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note>to appear</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Avoiding, finding and fixing spreadsheet errors: A survey of automated approaches for spreadsheet QA</title>
		<author>
			<persName><forename type="first">D</forename><surname>Jannach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schmitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wotawa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Systems and Software</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note>to appear</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A Theory of Diagnosis from First Principles</title>
		<author>
			<persName><forename type="first">R</forename><surname>Reiter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="57" to="95" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">What You See Is What You Test: A Methodology for Testing Form-Based Visual Programs</title>
		<author>
			<persName><forename type="first">G</forename><surname>Rothermel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dupuis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Burnett</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ICSE 1998</title>
				<meeting>ICSE 1998</meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="198" to="207" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Interactive ontology debugging: Two query strategies for efficient fault localization</title>
		<author>
			<persName><forename type="first">K</forename><surname>Shchekotykhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Friedrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fleiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rodler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="88" to="103" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
