Reducing Sequential Diagnosis Costs by Modifying Reiter’s Hitting Set Tree

Patrick Rodler∗ and Manuel Herold
Alpen-Adria Universität Klagenfurt
e-mail: firstname.lastname@aau.at

∗ This work was supported by the Carinthian Science Fund (KWF), contract KWF-3520/26767/38701.

Extended Abstract

When systems such as software, physical devices or knowledge bases do not exhibit required properties, there is often a substantial number of competing fault explanations, called diagnoses. Sequential Diagnosis aims at suggesting a minimal-cost sequence of measurements to identify the root cause of a system failure, i.e., the diagnosis that pinpoints the actually faulty system components. Since the minimization of the overall measurement costs is NP-hard, sequential diagnosis methods rely on local optimizations to approximate the optimal sequence of measurements. To this end, an iteratively (re-)computed set of (e.g., the most probable) diagnoses usually serves as a decision basis for the selection of the best next measurement. The computation of diagnoses is often accomplished by hitting set algorithms, due to their favorable properties in terms of general applicability, independence of specific theorem provers, and ability to generate diagnoses in best-first order.

In this work, we argue that sequential diagnosis can be interpreted in two natural ways, as a static (StatSD) or dynamic (DynSD) problem, and that existing methods focus only on the latter. Both problem formulations assume a diagnosis problem instance (DPI), consisting of knowledge about the system, its components, and the observations and measurements made so far, to be given as an input. The main difference between DynSD and StatSD is the way new measurements are taken into account. In DynSD, they are directly used to extend (the measurement set of) the DPI, i.e., the relevant DPI is dynamically changing throughout the sequential process, each time shifting the focus to the diagnosis space of the extended DPI. StatSD, in contrast, assumes the (initial) DPI is static and does not add new measurements to it. Instead, it uses the measurements as constraints to prune the quasi “frozen” search space of diagnoses for the fixed DPI.

We prove that, under certain conditions, solving StatSD leads to a lower expected number of measurements compared to DynSD. In general, however, StatSD is incomplete and gives no guarantee that the actual diagnosis will be found, as opposed to DynSD. Hence, DynSD and StatSD can be seen as two extremes of a generalized view on sequential diagnosis: on the one extreme, new information is always used to switch to a new DPI (implying completeness), whereas, on the other extreme, new information is never used to switch to a new DPI (implying less measurement costs).

To combine the benefits of both strategies, we introduce a novel variant of Reiter’s hitting set algorithm that implements this generalized sequential diagnosis process, where DPI switches can take place at any time, while guaranteeing completeness. Supporting our theoretical results, empirical examinations using real-world problems reveal that the new algorithm, even with a fairly simple DPI-switching strategy based on the search tree size, substantially reduces the effort required to locate the actual diagnosis, compared to (sound and complete best-first) diagnosis computation algorithms that switch DPIs in every iteration. In concrete figures, an average of 20% and up to 65% of the user interaction cost to diagnose the system could be saved, and the new strategy led to equal or lower measurement costs in 97% of the cases. Moreover, the proposed algorithm is as generally applicable as Reiter’s method, and it is fully compatible with, and able to synergize with, other measurement selection approaches or optimization heuristics such as entropy.
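To make the role of hitting set algorithms tangible, the following Python sketch enumerates minimal hitting sets of a collection of conflict sets in best-first order, using additive component costs (e.g., negative log fault probabilities, so that cheapest-first amounts to most-probable-first). It is only an illustration under simplifying assumptions, not the algorithm proposed in this work: all conflict sets are assumed to be given up front, whereas Reiter’s method computes them on demand with a theorem prover, and all names are made up for the example.

import heapq

def best_first_hitting_sets(conflicts, cost):
    """Enumerate minimal hitting sets of the given conflict sets in
    best-first (cheapest-first) order.

    conflicts -- list of sets of component names (the conflict sets)
    cost      -- dict mapping each component to a positive cost
    """
    found = []                                  # minimal hitting sets found so far
    queue = [(0.0, 0, frozenset())]             # (total cost, tie-breaker, partial set)
    counter = 1
    while queue:
        total, _, hs = heapq.heappop(queue)
        if any(m <= hs for m in found):         # prune supersets of known results
            continue
        open_conflicts = [c for c in conflicts if not (c & hs)]
        if not open_conflicts:                  # every conflict is hit: a diagnosis
            found.append(hs)
            yield hs
            continue
        for comp in sorted(open_conflicts[0]):  # branch on one still-unhit conflict
            heapq.heappush(queue, (total + cost[comp], counter, hs | {comp}))
            counter += 1

# Toy example: two conflicts over three components
conflicts = [{"c1", "c2"}, {"c2", "c3"}]
cost = {"c1": 1.0, "c2": 1.0, "c3": 2.0}
for diagnosis in best_first_hitting_sets(conflicts, cost):
    print(sorted(diagnosis))                    # ['c2'] first, then ['c1', 'c3']

Each emitted hitting set corresponds to a diagnosis, i.e., a minimal set of components whose assumed faultiness accounts for all known conflicts.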
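The difference between StatSD and DynSD can likewise be illustrated at the level of the leading diagnoses. In the toy sketch below (again an illustration under simplifying assumptions, not the paper’s method), a measurement outcome is abstracted to the components it exonerates or implicates. A StatSD-style step leaves the DPI untouched and merely prunes the frozen set of candidates, whereas a DynSD-style step would add the measurement to the DPI and recompute the diagnoses of the extended problem.

def prune_statsd(diagnoses, exonerated=frozenset(), implicated=frozenset()):
    """StatSD-style pruning of a frozen diagnosis space (toy model):
    a measurement outcome is abstracted to the components it shows to be
    healthy (exonerated) or at fault (implicated); the DPI itself is never
    modified, candidate diagnoses are merely filtered."""
    return [d for d in diagnoses
            if not (d & exonerated)     # must not blame an exonerated component
            and implicated <= d]        # must blame every implicated component

# Minimal diagnoses of the fixed, initial DPI (e.g., hitting sets as above)
diagnoses = [frozenset({"c2"}), frozenset({"c1", "c3"})]

# A measurement reveals that component c2 behaves correctly
print(prune_statsd(diagnoses, exonerated=frozenset({"c2"})))
# only the diagnosis {c1, c3} survives; no recomputation takes place

If every candidate is pruned away, for instance because a measurement implicates a component that none of the initial diagnoses blames, the frozen search space holds no answer; this mirrors the incompleteness of StatSD discussed above. The generalized process interleaves such pruning steps with full DPI switches, and the decision when to switch can be based on a simple criterion such as the current size of the hitting set tree.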