<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
<title level="a" type="main">Testing Computer Vision Applications: An Experience Report on Introducing Code Coverage Analysis in the Field</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Iulia</forename><surname>Nica</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Franz</forename><surname>Wotawa</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Gerhard</forename><surname>Jakob</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Kathrin</forename><surname>Juhart</surname></persName>
						</author>
<title level="a" type="main">Testing Computer Vision Applications: An Experience Report on Introducing Code Coverage Analysis in the Field</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">742E597A28335C86CC48AFB5D790F5B7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T19:42+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper we present our work in progress in defining a suitable testing and validation methodology to be used within computer vision (CV) projects. Typical quality assurance (QA) measures, targeting the applicability in real-world scenarios, are meant here to complement the research on specific computer vision methods. While inspecting the existing literature in the domain of CV performance evaluation, we first identified the main challenges CV researchers have to deal with. Second, as every vision algorithm eventually takes the form of a software program, we followed the classic software development process and performed an in-depth code coverage analysis in order to assure the quality of our test suites and pinpoint code areas that need to be reviewed. This further leaves us with the questions of which test coverage tool to prefer in our situation and whether we can introduce specific evaluation criteria for identifying the right tool to be used within a CV project. In this article we also contribute to answering these questions.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Motivation</head><p>Computer vision (CV) is used today in a wide range of real-world applications, from industrial inspection and safety-relevant vehicle functions to 3D model generation by photogrammetric methods, medical imaging and fingerprint recognition. Although a vast variety of literature covering evaluation techniques in subfields of the whole topic is available, still no study reports on testing a complete vision system, i.e., one comprising hardware, software, data communication and control. Obviously, the quality of CV applications has a great impact on their usability in real-world scenarios. Hence, besides traditional CV evaluation techniques, such as using test data sets as input and comparing the algorithm's output against a manually established ground truth, we have to control the quality of the involved applications by means of a more generic evaluation strategy. In this context, quality assurance (QA) activities like peer reviews, coding guidelines, or the usage of software quality tools (static and dynamic analyzers) offer many benefits, from being able to track the CV project's progress and estimate its relative complexity to helping us realize when we have achieved the desired state of quality <ref type="bibr" target="#b3">[4]</ref>.</p><p>Still, what is different about testing CV applications, and why is it so difficult to test whether computer vision algorithms can live up to their claims?</p><p>Regarding algorithmic correctness on the one side, it is often very hard to get a consistent and exact definition of the desired output for a specific input. 
Especially in classification tasks, it is tough to decide when the obtained results are still correct and when we are dealing with abnormal behavior.</p><p>Regarding the evaluation of the complete, often very complex vision system on the other side, the QA team has to manage and run a large number of tests on all levels, from unit tests to integration, function and system tests. Therefore, one needs to understand the system as a whole, as well as all of its components and their interdependencies. Furthermore, when identifying use cases based on the defined system requirements and specifications, we also have to cover possible hardware faults. Fortunately, there are today well-established QA practices and many quality management tools available on the market, meant to ease the generic evaluation of products and processes, so that the only remaining challenge is to find a proper way to integrate them into the vision project.</p><p>The remainder of this paper is organized as follows. In Section 2 we review the existing literature in the domain of CV performance evaluation and introduce some basic quality assurance terms. Afterwards, in Section 3, we identify and discuss the requirements a code coverage tool has to fulfill in order to be used in the CV domain. Further on, we give a short overview of our four best-ranked tools. In Section 4 we first introduce the case study and compare the tools based on their integration with the example application. Additionally, we present the first success story in improving our code coverage. With Section 5 we conclude this paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Research</head><p>In our work, we first inspected the existing literature in the domain of performance evaluation in computer vision. General overviews of empirical evaluations were found in <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>, and <ref type="bibr" target="#b10">[11]</ref> and will be presented here in chronological order. They all review the commonly used techniques for performance characterization of algorithms in different subfields of CV.</p><p>In the early 90s, <ref type="bibr" target="#b4">[5]</ref> discussed the evident lack of performance evaluation in the literature on vision algorithms. In the author's opinion, this situation had been tolerated because the ability to perform a CV task was interesting enough that the performance of the new algorithm became a secondary issue. In order to quickly design a machine vision system which works efficiently and meets its requirements, <ref type="bibr" target="#b4">[5]</ref> suggests an analogy with systems engineering methodology. Thus, a well-defined protocol containing a modeling component, an experimental component and a data analysis component was envisioned. The modeling component would describe the ideal input image population (real or synthetic images), the random perturbation model (by which non-ideal images arise), the random perturbation process (which characterizes the output random perturbation as a function of the input random perturbation) and the criterion function (by which one can quantify the difference between the ideal output and the computed output). 
The experimental component describes the performed experiments, whilst the data analysis component determines the performance characterization based on the experimentally observed data.</p><p>In the absence of acknowledged methods for the evaluation of algorithmic performance, <ref type="bibr" target="#b1">[2]</ref> proposed defining performance as a function of mathematical sophistication. However, as the number and specificity of assumptions made in the mathematics underlying a vision algorithm increase (i.e., as the sophistication of an algorithm increases), the performance of the CV application does not necessarily increase. This is the case when the assumptions made do not match the application characteristics. Furthermore, the need for standard databases, evaluation protocols and scoring methods/performance metrics available to researchers was identified by the authors.</p><p>Regarding the typology of test data, <ref type="bibr" target="#b2">[3]</ref> first differentiate between data without noise and data with noise. Moreover, they mention three types of empirical testing: testing using real data with full control, empirical testing with partially controlled test data and testing in an uncontrolled environment. Depending on the distribution of the available data into training and testing sets, test protocols have been proposed. Another issue discussed in <ref type="bibr" target="#b2">[3]</ref> is again the necessity of defining a metric which can be used to quantify performance. The authors associate such performance metrics with the failure modes of an algorithm. For each type of vision algorithm, specific evaluation metrics were defined according to the function performed by the given algorithm. 
Some examples are the ROC (Receiver Operating Characteristic) curve in the case of a feature detector, the confusion matrix in the case of object recognition, or the true and false matches when dealing with matching algorithms, such as those used in stereo or motion estimation.</p><p>Similarly to <ref type="bibr" target="#b2">[3]</ref>, the authors of <ref type="bibr" target="#b10">[11]</ref> outline two different levels of analysis for vision systems:</p><p>• technology evaluation, which concerns the characteristics of the algorithms using generic metrics, such as ROC curves. Standardized data sets are used, and the results are therefore repeatable and depend on the size and scope of the test data sets. Generally, this evaluation stage requires simple metrics related to the fulfilled function: detection, estimation, classification. • scenario evaluation, which concerns the system's behavior in particular situations, i.e., for a specific functionality with its sets of variables (e.g., number of users, type of lighting). The test data is based on a controlled real world and is therefore only partly reproducible. More complex metrics are to be used here, e.g., system reliability expressed as mean time between failures.</p><p>[11] takes the topic of technology evaluation a step further by defining a set of eight key questions, meant to highlight the best practices and the state of evaluation methodology in several representative areas of the computer vision discipline: sensor characterization, feature detection, shape- and grey-level-based object localization, shape-based object indexing and recognition, lossy image and video compression, differential optical flow, stereo vision, face recognition, and measuring structural differences in medical images. From the guiding questions formulated in <ref type="bibr" target="#b10">[11]</ref>, we selected those which, in our opinion, are the first to be answered in algorithmic testing:</p><p>1. Is there a data set for which the correct answers are known? 2. 
Are there data sets in common use?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.</head><p>Are there any known algorithms that can be used as benchmarks for comparison? 4. What should we be measuring to quantify performance? What metrics are used?</p><p>Though the analysis in <ref type="bibr" target="#b10">[11]</ref> also touches other aspects of building a complete vision system, it excludes testing the hardware. Furthermore, the mentioned software validation is limited to ensuring that the software implementation of an algorithm correctly instantiates its mathematical foundation <ref type="bibr" target="#b10">[11]</ref>. Hence, the collected answers for each of the considered visual tasks indicate that performance characterization techniques are mostly application/algorithm specific and that they currently do not refer to the integrated system as a whole, i.e., one comprising hardware, software, data communication and control.</p><p>More recently published research, like <ref type="bibr" target="#b12">[13]</ref> and <ref type="bibr" target="#b13">[14]</ref>, emphasizes the role of test data generation and test data validation in vision testing. For the purpose of evaluating CV algorithms, there are today some publicly available data sets, such as the FERET database <ref type="bibr" target="#b8">[9]</ref> for face recognition algorithms, the Middlebury <ref type="bibr" target="#b9">[10]</ref> and KITTI <ref type="bibr" target="#b6">[7]</ref> test data sets for stereo vision, or the VOT datasets <ref type="bibr" target="#b5">[6]</ref> for visual tracking. The usage of these large sets of test images nevertheless brings some problems. One of them is that the test data sets are not specially designed for a particular vision application, but for a class of algorithms. Hence, 100% coverage of the possible scenarios cannot be guaranteed. 
As introduced in <ref type="bibr" target="#b12">[13]</ref> and further elaborated in <ref type="bibr" target="#b13">[14]</ref>, a solution to this problem would be the automatic generation of datasets, so that they contain all the typical scenes and hazards without including too much redundancy, keeping the testing effort manageable.</p><p>At the same time, as the vision algorithm will eventually take the form of a software program, we see no reason why we should not take advantage of the great progress in the domain of quality assurance, and software testing in particular. The usage of standardized QA methods, metrics and tools can ease the work of any CV developer and quickly improve the overall process, especially in terms of the system's resilience and the end user's satisfaction.</p><p>"Quality control activities determine whether a product conforms to its requirements, specifications, or pertinent standards" <ref type="bibr" target="#b11">[12]</ref>. In addition to the traditional testing practices, QA activities encompass peer reviews, coding guidelines, and also the usage of software quality tools, like static analyzers that examine source code for possible errors, or code coverage analysis tools that can measure the actual coverage of the software with the available test data sets. For more information on software testing and other QA techniques we refer the interested reader to <ref type="bibr" target="#b7">[8]</ref>, <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b11">[12]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Code Coverage Analysis</head><p>Among the first quality assurance metrics invented for systematic software testing, code coverage is used to describe the degree to which the source code of a program is exercised by a particular test suite. Test coverage can be used in unit testing, regression testing, for test case order optimization, test suite augmentation, or test suite minimization.</p><p>The code coverage analysis process is generally divided into code instrumentation, data gathering, and coverage analysis. Code instrumentation consists of inserting additional statements that monitor the execution of the source code. The instrumentation can be done either at source code level, in a separate pre-processing phase, or at runtime.</p><p>In order to be self-contained, we briefly introduce here the most commonly used code coverage metrics, as they might be new to the computer vision community. We further refer to the following small code snippet to quickly highlight their major advantages/disadvantages in practice:</p><formula xml:id="formula_0">if (x&gt;1 &amp;&amp; y==0) { z=z+1; } if (x==2 || z&gt;1) { z=z+2; }</formula><p>As already mentioned, several kinds of instrumentation are possible. The most common are for:</p><p>• line or statement coverage: the tool instruments the execution of every executable source code line; this coverage criterion is a rather poor one, as it is completely insensitive to some control structures and logical operators.</p><p>For instance, one could execute every statement (reaching 100% line coverage) in our example by writing a single test case: T1(x=2, y=0, z=4). Now, let us assume that the second decision should have stated z&gt;0; this error would not be detected. Or perhaps the first decision should contain an or rather than an and; this error would also go undetected. 
• decision or branch coverage: reports whether each decision has a true and a false outcome at least once; this criterion is stronger than line coverage, but it is still rather weak.</p><p>For instance, with our previous test-case input T1(x=2, y=0, z=4) and a new one, T2(x=3, y=1, z=1), we can reach full decision coverage. However, if the second decision should have had z&lt;1 instead of z&gt;1, the mistake would not be detected by the two test cases. • condition coverage: in this case, one has to write enough test cases to ensure that each condition in a decision takes on both true and false outcomes at least once; this metric is similar to decision coverage, but has better sensitivity to the control flow. However, full condition coverage does not guarantee full decision coverage.</p><p>For instance, the following test cases, T3(x=1, y=0, z=4) and T4(x=2, y=1, z=1), cover all condition outcomes, but they cover only two of the four decision outcomes. • function coverage: reports whether each function is called (and how many times); it is useful during preliminary testing to quickly find coarse deficiencies in a test suite.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">CV tailored Evaluation Criteria</head><p>Following the classic software development process depicted in Figure <ref type="figure" target="#fig_0">1</ref>, we first learned that code coverage analysis does not exist in most CV projects. As a result, we tried to identify the must-have and nice-to-have features of a code coverage tool to be used in the CV application domain. As in any tool selection process, one first has to clarify the user's requirements. We will present only those particular requirements related to computer vision software, and neglect general questions such as: which platforms can the tool run on, what is the target application's language, or which compilers are supported. We will not mention here requirements coming from the quality assurance team, which are to be discussed in the next section.</p><p>The following list ranks the priorities of these specific features, as discussed with CV software developers:</p><p>1. working with templates: due to the great variety of data types (pixel and parameter types), there is a tremendous number of templates defined in CV applications, which have to be taken into consideration when analyzing the code coverage. Tools that cannot handle templates appropriately are dismissed. 2. unit testing support: in our case, support for the CPPUnit unit testing framework is needed, as this is the most frequently used framework in C/C++ CV applications. 3. excluding 3rd party libraries from the coverage analysis: as most CV applications make use of third-party libraries, whose analysis is obviously not desired, the tool has to provide a simple way to hook/instrument only certain files. 4. automated testing/non-interactive testing: taking into consideration the high complexity of currently developed CV software, easy automation of the test coverage analysis is essential.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">performance under large amounts of test data:</head><p>There is no doubt that the insertion of instrumentation will increase the code size and affect the instrumented application's performance, i.e., it will use more memory and run slower. A low performance overhead is of course desired; however, considering the complexity of the target programs, our minimal requirement is that the analysis tool does not crash.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Four state-of-the-art Code Coverage Tools</head><p>Identifying the right tool for code coverage analysis in vision applications can lead to major productivity improvements and, implicitly, to increases in the release quality of the overall computer vision system. Hence, various free and commercial coverage analyzers have been inspected and compared. As a large variety of coverage metrics exist (see the preceding summary), the QA team imposed as a requirement that the code coverage tool should be able to measure at least condition coverage. This requirement, together with the previously presented CV-tailored evaluation criteria, has led to limiting our comparative evaluation to the following four state-of-the-art commercial coverage tools: C++ Coverage Validator, Squish Coco, BullseyeCoverage, and Testwell CTC++.</p><p>The analyzed Dibgiom libraries (see Section 4) further include: • Filter: General image filters (arithmetic, logic, etc.) that convert, in principle, a pixel in the source image(s) to a pixel in the target image • KernelFilter: Core-based image filters (mean, median, etc.) which do not calculate any convolution • KeyPoint: Description of key points for various detectors • Operation: Operations on images whose result, or whose source, is not an image (source not an image: filling images, etc.; target not an image: the sum of all the pixels in the image) • Pyramid: Generation of pyramid representations (Gauss, etc.) • Segmentation: Image-based operations that compute segmentations from arbitrary source images (Watershed, RegionGrowing, etc.) • Sift: Special version of a SIFT detector.</p><p>For the Dibgen experiments we used the same unit test suites and the same configuration for all four coverage tools. 
Although each tool features more than just decision and function coverage, we will merely present the comparison of these two types of coverage measurements, as these are the only ones computed by all four tools.</p><p>The tests carried out for the Dibgiom experiments are also unit tests, in which the source data is generated either directly by means of unit-test programs (usually only for very simple algorithms) or by reading the image data from files. In the latter case, the expected outcome is generated with reference implementations chosen from the literature (e.g., MATLAB, OpenCV) and is then compared with the outcome produced by Dibgiom.</p><p>In Table <ref type="table" target="#tab_3">1</ref> we list the global results for the whole Dibgen test application, while in Table <ref type="table" target="#tab_4">2</ref> and Table <ref type="table" target="#tab_5">3</ref> we present the coverage results per directory. It is worth noting that with Testwell CTC++ the coverage results are extremely low, while the other three tools compute comparable coverage results. Table <ref type="table" target="#tab_1">4</ref> depicts the running times for the normal, uninstrumented program and for the instrumented programs. Note that the tests were run on a notebook with an Intel(R) Core(TM) i7-4500U CPU at 1.80 GHz and 8 GB of RAM, running under Windows 10 Pro. Although the running time for the program invoked by Coverage Validator is approximately six times higher, there is no source code instrumentation involved, i.e., there is no need to recompile or relink the target program. The only requirement is the existence of PDB files with debug information and/or MAP files with line number information. Therefore, we chose to further use the Coverage Validator tool for the first Dibgiom experiments. The results can be seen in Table <ref type="table" target="#tab_2">5</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">First Success Stories</head><p>One of the most complex and frequently used basic libraries in JR's CV applications is the ParameterPool library from the Dibgen collection. With about 17,000 LOC, the library is used to store any kind of parameters of arbitrary types in one container. Each parameter can be combined with validity information, access-level permissions for user-interface-based parameter modifications, as well as several kinds of descriptive text (unit, help text). Additionally, parameters can be grouped together, and it is possible to define several types of parameter dependencies. Since this library is used heavily in nearly every JR CV application, the JR developers paid particular attention to testing it thoroughly from the very start of development. However, a first code coverage analysis showed disappointing results, especially in branch, function and line coverage, while at least file coverage could reach nearly acceptable results (see Figure <ref type="figure" target="#fig_1">2</ref>). A more detailed analysis showed that only 12 out of 36 source code files had a line coverage better than 90%, while 9 files were not tested at all (see Figure <ref type="figure" target="#fig_3">4</ref>). Although the remaining 15 files were tested at least partially from the line coverage point of view, their branch coverage in particular showed very poor results. After a careful review of the tested source code, the test code, and the test data used, the test code was adapted in some places and some test data sets were slightly modified.</p><p>Additionally, some new test functions were developed, especially for previously untested files or functions. One meanwhile-unused source code file could be entirely removed. 
Two of the untested source code files contained only source code that is used to disable default class behavior (making the default constructor, copy constructor and/or assignment operator private), which makes this code untestable by design. Altogether, all of these modifications did not touch more than 10% of the test code, but resulted in a huge improvement in all code coverage measures (see Figure <ref type="figure" target="#fig_2">3</ref>). As one can see in Figure <ref type="figure" target="#fig_4">5</ref>, of the remaining 35 source code files, 32 now reach a line coverage above 90% (23 of which even reach 100%, compared to only 9 before the modifications were made). The two remaining untested files contain the above-mentioned disabling source code. By improving the test code and the test data for the exemplarily chosen library, three implementation errors were found and corrected, two of which can be considered to potentially cause major problems in applications. Spending some effort on QA and improving the coverage of the tested source code will pay off in the near future in several stages of the testing process, especially in regression and integration tests.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions</head><p>In this paper we presented our first steps in the direction of constructing a generic testing and evaluation protocol for CV applications. In our view, the performance characterization methodology in the domain can successfully be complemented with well-known techniques borrowed from a typical quality assurance process.</p><p>The conducted experiments on JR's source code demonstrated that, with little effort, by means of using a code coverage analysis tool for the available unit tests, CV developers can considerably improve their code, and implicitly the release quality of the overall CV system.</p><p>After finishing unit/module testing the program, we have to perform higher-order testing, such as integration and system tests (see Figure <ref type="figure" target="#fig_0">1</ref>), in order to complete the testing process. Therefore, together with JR, we analyzed the requirements and possible use cases/hazards of one CV application, which was chosen as a representative candidate in the Vision+ project (http://comet-visionplus.at/). We paid particular attention to the process of test case definition, with a focus on: the requirement(s) (from the requirements specification) related to a particular test case, its prerequisites (any conditions that must be fulfilled prior to executing the test), and its detailed setup and preferred execution procedure (automated/manual). However, as usually a test management tool is used to accomplish this task, we further encourage CV developers to consider the integration of such a tool in their projects. Our colleagues from JR have already started analyzing the test management tools available for managing functional software and hardware testing in agile development projects. 
Some of the benefits one gains are the assurance of the complete test cycle, the repeatability of tests, as well as the automatic generation of statistics and reports.</p><p>Finally, we would like to summarize the main ideas which will further guide the work presented in this paper. On the one hand, as resources are always limited, we have to find the right mixture of QA techniques and to focus on specific CV pain points. In order to do this, it is important to determine the desired quality attributes for CV applications. On the other hand, we have to find a way to derive applicability rules for certain sets of CV algorithmic classes. Due to the vast diversity of CV algorithms, these tasks are rather difficult; however, the classical hierarchy of vision systems, which groups them into low-, mid- and high-level processing, could serve as a starting point. At low-level vision, code structure and data representation are still closely correlated (in other words, every pixel has to be treated by some kind of operation/code), thus code improvement by QA directly affects the data quality. For example, filtering operations by convolution (such as those contained in our Dibgiom library) consist of many simple code snippets executed many times, sequentially or in parallel, thus even small code discrepancies produce a large effect, which easily propagates further to higher processing levels. Mid- and high-level vision algorithms, on the other hand, are more difficult to tackle, because the representations fall into one of the exponentially many branches of different meta-data types, where often the same meta-data can be produced by fundamentally different code pieces.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. One-to-one correspondence between development and testing processes.</figDesc><graphic coords="3,305.13,71.12,245.08,132.62" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Coverage Validator's summary tab before improvements.</figDesc><graphic coords="6,91.80,71.12,406.47,87.61" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Coverage Validator's summary tab after improvements.</figDesc><graphic coords="6,91.80,187.62,406.47,87.33" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. Coverage Validator's Files and Lines tab before improvements.</figDesc><graphic coords="6,91.80,303.84,406.47,208.43" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 .</head><label>5</label><figDesc>Figure 5. Coverage Validator's Files and Lines tab after improvements.</figDesc><graphic coords="6,91.80,527.71,406.48,193.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Dibgen is a collection of basic C++ libraries used particularly, but not exclusively, in computer vision applications implemented by JOANNEUM RESEARCH (JR). Included libraries cover basic, mostly matrix-based mathematical operations, color handling and evaluation, as well as generic parameter storage, progress-information handling, different types of basic file I/O methods often used in computer vision, and value-to-string conversion (and back-conversion). All the libraries are implemented using template-heavy C++ code, allowing the usage of different data types (pixel types, parameter types) for most of the operations. In terms of volume, Dibgen consists of approximately 100,000 LOC. The other partially analyzed collection was Dibgiom. Seen as an OpenCV counterpart and based on Dibgen, it contains 15 libraries, all used for image-processing tasks. The collection consists of approximately 9 MB of source code and approximately 255,000 LOC. We further provide a brief description of those Dibgiom libraries that were analyzed: • Band: Various representations of image data in memory (tiled with file I/O for huge satellite data, pure memory-based for rapid CPU access, specially aligned memory layout for acceleration using Intel Performance Primitives, special layout for CUDA acceleration), transparently accessible via the same interface to both user and algorithms. • BandIterator: Generic access iterators for bands regardless of memory layout (see above). • Calibration: Simple radiometric calibration methods. • Convolve: Image filters based on convolution (Gauss, Laplace, etc.). • Detect: Various detectors (Extrema, Bright Spot, Corner, etc.)</figDesc><note>4 Case Study: Dibgen and Dibgiom Libraries</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 4 .</head><label>4</label><figDesc>Running Times for the unit tests defined for the Dibgen Solution</figDesc><table><row><cell>For non-instrumented programs</cell><cell>68,44 sec</cell></row><row><cell cols="2">For programs invoked by Coverage Validator 475,68 sec</cell></row><row><cell>For CTC++ instrumented programs</cell><cell>74,16 sec</cell></row><row><cell>For Bullseye instrumented programs</cell><cell>68,97 sec</cell></row><row><cell>For Squish Coco instrumented programs</cell><cell>70,81 sec</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 5 .</head><label>5</label><figDesc>Dibgiom Coverage Results computed with C++ Coverage Validator</figDesc><table><row><cell>Library</cell><cell>Decision Coverage</cell><cell>Function Coverage</cell></row><row><cell>Band</cell><cell>36,84%</cell><cell>53,55%</cell></row><row><cell>BandIterator</cell><cell>13,94%</cell><cell>67,12%</cell></row><row><cell>Calibration</cell><cell>73,58%</cell><cell>13,33%</cell></row><row><cell>Convolve</cell><cell>14,52%</cell><cell>34,77%</cell></row><row><cell>Detect</cell><cell>56,25%</cell><cell>41,72%</cell></row><row><cell>Filter</cell><cell>30,02%</cell><cell>73,66%</cell></row><row><cell>KernelFilter</cell><cell>40,08%</cell><cell>48,86%</cell></row><row><cell>KeyPoint</cell><cell>86,99%</cell><cell>70,64%</cell></row><row><cell>Operation</cell><cell>54,53%</cell><cell>68,21%</cell></row><row><cell>Pyramid</cell><cell>2,01%</cell><cell>12,71%</cell></row><row><cell>Segmentation</cell><cell>87,72%</cell><cell>87,27%</cell></row><row><cell>Sift</cell><cell>78,10%</cell><cell>66,34%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 1 .</head><label>1</label><figDesc>Overall Dibgen Coverage Results</figDesc><table><row><cell></cell><cell>C++ Coverage Validator</cell><cell>Testwell CTC++</cell><cell>BullseyeCoverage</cell><cell>Squish Coco</cell></row><row><cell>Decision Coverage</cell><cell>31,27%</cell><cell>9%</cell><cell>40%</cell><cell>45,30%</cell></row><row><cell>Function Coverage</cell><cell>39,86%</cell><cell>8%</cell><cell>52%</cell><cell>51,95%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 2 .</head><label>2</label><figDesc>Dibgen Coverage Results per Library (Decision Coverage)</figDesc><table><row><cell></cell><cell>C++ Coverage Validator</cell><cell>Testwell CTC++</cell><cell>BullseyeCoverage</cell><cell>Squish Coco</cell></row><row><cell>Color</cell><cell>52,91%</cell><cell>9%</cell><cell>75%</cell><cell>78,65%</cell></row><row><cell>Exception</cell><cell>N.A.</cell><cell>30%</cell><cell>12%</cell><cell>40%</cell></row><row><cell>Fileio</cell><cell>15,15%</cell><cell>15%</cell><cell>44%</cell><cell>39,65%</cell></row><row><cell>Internationalisation</cell><cell>55,81%</cell><cell>77%</cell><cell>61%</cell><cell>70,89%</cell></row><row><cell>Math</cell><cell>62,36%</cell><cell>3%</cell><cell>30%</cell><cell>39,54%</cell></row><row><cell>ModuleInterface</cell><cell>14,05%</cell><cell>5%</cell><cell>23%</cell><cell>28,54%</cell></row><row><cell>ParameterPool</cell><cell>12,39%</cell><cell>6%</cell><cell>46%</cell><cell>58,95%</cell></row><row><cell>ParameterPoolDocumentation</cell><cell>N.A.</cell><cell>0%</cell><cell>0%</cell><cell>N.A.</cell></row><row><cell>ProgramOptions</cell><cell>0%</cell><cell>0%</cell><cell>59%</cell><cell>0%</cell></row><row><cell>Progress</cell><cell>55,07%</cell><cell>12%</cell><cell>47%</cell><cell>54,51%</cell></row><row><cell>ResultDataPool</cell><cell>5,31%</cell><cell>1%</cell><cell>20%</cell><cell>29,51%</cell></row><row><cell>Serialization</cell><cell>10,14%</cell><cell>11%</cell><cell>86%</cell><cell>76,31%</cell></row><row><cell>Strings</cell><cell>45,50%</cell><cell>36%</cell><cell>44%</cell><cell>50,66%</cell></row><row><cell>Types</cell><cell>N.A.</cell><cell>1%</cell><cell>8%</cell><cell>20,28%</cell></row><row><cell>UserDataBase</cell><cell>10,94%</cell><cell>77%</cell><cell>77%</cell><cell>81,51%</cell></row><row><cell>Utilities</cell><cell>0%</cell><cell>3%</cell><cell>0%</cell><cell>23,18%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 3 .</head><label>3</label><figDesc>Dibgen Coverage Results per Library (Function Coverage)</figDesc><table><row><cell></cell><cell>C++ Coverage Validator</cell><cell>Testwell CTC++</cell><cell>BullseyeCoverage</cell><cell>Squish Coco</cell></row><row><cell>Color</cell><cell>59,41%</cell><cell>6%</cell><cell>81%</cell><cell>81,17%</cell></row><row><cell>Exception</cell><cell>32,50%</cell><cell>36%</cell><cell>45%</cell><cell>45,45%</cell></row><row><cell>Fileio</cell><cell>16,81%</cell><cell>26%</cell><cell>60%</cell><cell>56,52%</cell></row><row><cell>Internationalisation</cell><cell>77,67%</cell><cell>95%</cell><cell>94%</cell><cell>94,44%</cell></row><row><cell>Math</cell><cell>74,22%</cell><cell>3%</cell><cell>47%</cell><cell>47,08%</cell></row><row><cell>ModuleInterface</cell><cell>23,49%</cell><cell>4%</cell><cell>37%</cell><cell>35,68%</cell></row><row><cell>ParameterPool</cell><cell>36,13%</cell><cell>6%</cell><cell>72%</cell><cell>69,17%</cell></row><row><cell>ParameterPoolDocumentation</cell><cell>N.A.</cell><cell>0%</cell><cell>0%</cell><cell>N.A.</cell></row><row><cell>ProgramOptions</cell><cell>0%</cell><cell>0%</cell><cell>50%</cell><cell>0%</cell></row><row><cell>Progress</cell><cell>14,30%</cell><cell>9%</cell><cell>63%</cell><cell>68,46%</cell></row><row><cell>ResultDataPool</cell><cell>8,60%</cell><cell>2%</cell><cell>37%</cell><cell>34,69%</cell></row><row><cell>Serialization</cell><cell>11,22%</cell><cell>6%</cell><cell>86%</cell><cell>86,20%</cell></row><row><cell>Strings</cell><cell>49,46%</cell><cell>37%</cell><cell>65%</cell><cell>64,66%</cell></row><row><cell>Types</cell><cell>51,87%</cell><cell>1%</cell><cell>22%</cell><cell>22,88%</cell></row><row><cell>UserDataBase</cell><cell>10,45%</cell><cell>80%</cell><cell>86%</cell><cell>86,20%</cell></row><row><cell>Utilities</cell><cell>0%</cell><cell>7%</cell><cell>26%</cell><cell>30,76%</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Technische Universität Graz, Austria, email: {inica,wotawa}@ist.tugraz.at</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">JOANNEUM RESEARCH, email:{gerhard.jakob,Kathrin.Juhart}@joanneum.at</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_2">http://www.softwareverify.com/cpp-coverage.php</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_3">http://www.froglogic.com/squish/coco/index.php</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_4">http://www.bullseye.com/measurementTechnique.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_5">http://www.verifysoft.com/de cmtx.html</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGEMENTS</head><p>This work was partly funded by BMVIT/BMWFW under the COMET programme, project no. 836630, by "Land Steiermark" through SFG under project no. 1000033937, and by the Vienna Business Agency.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Beizer</surname></persName>
		</author>
		<title level="m">Black-box testing: techniques for functional testing of software and systems</title>
				<meeting><address><addrLine>NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Empirical Evaluation Techniques in Computer Vision</title>
		<author>
			<persName><forename type="first">Kevin</forename><surname>Bowyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Jonathon</forename><surname>Phillips</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>IEEE Computer Society Press</publisher>
			<pubPlace>Los Alamitos, CA, USA</pubPlace>
		</imprint>
	</monogr>
	<note>1st edn</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Performance Characterisation in Computer Vision: Statistics in Testing and Design</title>
		<author>
			<persName><forename type="first">Patrick</forename><surname>Courtney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Neil</forename><forename type="middle">A</forename><surname>Thacker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Imaging and Vision Systems</title>
				<meeting><address><addrLine>NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Nova Science Publishers, Inc., Commack</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="109" to="128" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Practical Software Metrics for Project Management and Process Improvement</title>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">B</forename><surname>Grady</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1992">1992</date>
			<publisher>Prentice-Hall, Inc</publisher>
			<pubPlace>Upper Saddle River, NJ, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Performance characterization in computer vision</title>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">M</forename><surname>Haralick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CVGIP: Image Underst</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="245" to="249" />
			<date type="published" when="1994-09">September 1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">A novel performance evaluation methodology for single-target trackers</title>
		<author>
			<persName><forename type="first">Matej</forename><surname>Kristan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jiri</forename><surname>Matas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ales</forename><surname>Leonardis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Vojir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Roman</forename><forename type="middle">P</forename><surname>Pflugfelder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gustavo</forename><surname>Fernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Georg</forename><surname>Nebehay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fatih</forename><surname>Porikli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luka</forename><surname>Cehovin</surname></persName>
		</author>
		<idno>CoRR, abs/1503.01313</idno>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Object scene flow for autonomous vehicles</title>
		<author>
			<persName><forename type="first">Moritz</forename><surname>Menze</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andreas</forename><surname>Geiger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">The Art of Software Testing</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Myers</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<pubPlace>New Jersey</pubPlace>
		</imprint>
	</monogr>
	<note>2nd edn</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The FERET Evaluation Methodology for Face-Recognition Algorithms</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Jonathon</forename><surname>Phillips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hyeonjoon</forename><surname>Moon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Syed</forename><forename type="middle">A</forename><surname>Rizvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><forename type="middle">J</forename><surname>Rauss</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Pattern Anal. Mach. Intell</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="1090" to="1104" />
			<date type="published" when="2000-10">October 2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">High-resolution stereo datasets with subpixel-accurate ground truth</title>
		<author>
			<persName><forename type="first">Daniel</forename><surname>Scharstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heiko</forename><surname>Hirschmüller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">York</forename><surname>Kitajima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Greg</forename><surname>Krathwohl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nera</forename><surname>Nešić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xi</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Porter</forename><surname>Westling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Pattern Recognition (GCPR)</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>Xiaoyi Jiang, Joachim Hornegger, and Reinhard Koch</editor>
		<imprint>
			<biblScope unit="volume">8753</biblScope>
			<biblScope unit="page" from="31" to="42" />
			<date type="published" when="2014">2014</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Performance characterisation in computer vision: A guide to best practices</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Thacker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Barron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Beveridge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Courtney</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">R</forename><surname>Crum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ramesh</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Wiegers</surname></persName>
		</author>
		<title level="m">Peer Reviews in Software: A Practical Guide</title>
				<imprint>
			<publisher>Addison-Wesley</publisher>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">VITROvision-testing for robustness</title>
		<author>
			<persName><forename type="first">Oliver</forename><surname>Zendel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wolfgang</forename><surname>Herzner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Murschitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ERCIM News</title>
		<imprint>
			<biblScope unit="issue">97</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">CV-HAZOP: introducing test data validation for computer vision</title>
		<author>
			<persName><forename type="first">Oliver</forename><surname>Zendel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markus</forename><surname>Murschitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Humenberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wolfgang</forename><surname>Herzner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2015 IEEE International Conference on Computer Vision, ICCV 2015</title>
				<meeting><address><addrLine>Santiago, Chile</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">December 7-13, 2015. 2015</date>
			<biblScope unit="page" from="2066" to="2074" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
